
Multinomial logistic regression

In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.[1] That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).

Multinomial logistic regression is known by a variety of other names, including polytomous LR,[2][3] multiclass LR, softmax regression, multinomial logit (mlogit), the maximum entropy (MaxEnt) classifier, and the conditional maximum entropy model.[4]

Background

Multinomial logistic regression is used when the dependent variable in question is nominal (equivalently categorical, meaning that it falls into any one of a set of categories that cannot be ordered in any meaningful way) and for which there are more than two categories. Some examples would be:

  • Which major will a college student choose, given their grades, stated likes and dislikes, etc.?
  • Which blood type does a person have, given the results of various diagnostic tests?
  • In a hands-free mobile phone dialing application, which person's name was spoken, given various properties of the speech signal?
  • Which candidate will a person vote for, given particular demographic characteristics?
  • Which country will a firm locate an office in, given the characteristics of the firm and of the various candidate countries?

These are all statistical classification problems. They all have in common a dependent variable to be predicted that comes from one of a limited set of items that cannot be meaningfully ordered, as well as a set of independent variables (also known as features, explanators, etc.), which are used to predict the dependent variable. Multinomial logistic regression is a particular solution to classification problems that use a linear combination of the observed features and some problem-specific parameters to estimate the probability of each particular value of the dependent variable. The best values of the parameters for a given problem are usually determined from some training data (e.g. some people for whom both the diagnostic test results and blood types are known, or some examples of known words being spoken).

Assumptions

The multinomial logistic model assumes that data are case-specific; that is, each independent variable has a single value for each case. As with other types of regression, there is no need for the independent variables to be statistically independent from each other (unlike, for example, in a naive Bayes classifier); however, collinearity is assumed to be relatively low, as it becomes difficult to differentiate between the impact of several variables if this is not the case.[5]

If the multinomial logit is used to model choices, it relies on the assumption of independence of irrelevant alternatives (IIA), which is not always desirable. This assumption states that the odds of preferring one class over another do not depend on the presence or absence of other "irrelevant" alternatives. For example, the relative probabilities of taking a car or bus to work do not change if a bicycle is added as an additional possibility. This allows the choice of K alternatives to be modeled as a set of K-1 independent binary choices, in which one alternative is chosen as a "pivot" and the other K-1 compared against it, one at a time. The IIA hypothesis is a core hypothesis in rational choice theory; however numerous studies in psychology show that individuals often violate this assumption when making choices. An example of a problem case arises if choices include a car and a blue bus. Suppose the odds ratio between the two is 1 : 1. Now if the option of a red bus is introduced, a person may be indifferent between a red and a blue bus, and hence may exhibit a car : blue bus : red bus odds ratio of 1 : 0.5 : 0.5, thus maintaining a 1 : 1 ratio of car : any bus while adopting a changed car : blue bus ratio of 1 : 0.5. Here the red bus option was not in fact irrelevant, because a red bus was a perfect substitute for a blue bus.

If the multinomial logit is used to model choices, it may in some situations impose too much constraint on the relative preferences between the different alternatives. It is especially important to take into account if the analysis aims to predict how choices would change if one alternative were to disappear (for instance if one political candidate withdraws from a three candidate race). Other models like the nested logit or the multinomial probit may be used in such cases as they allow for violation of the IIA.[6]

Model

Introduction

There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. This can make it difficult to compare different treatments of the subject in different texts. The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model.

The idea behind all of them, as in many other statistical classification techniques, is to construct a linear predictor function that builds a score from a set of weights linearly combined with the explanatory variables (features) of a given observation using a dot product:

$$\operatorname{score}(\mathbf{X}_i, k) = \boldsymbol\beta_k \cdot \mathbf{X}_i,$$

where Xi is the vector of explanatory variables describing observation i, βk is a vector of weights (or regression coefficients) corresponding to outcome k, and score(Xi, k) is the score associated with assigning observation i to category k. In discrete choice theory, where observations represent people and outcomes represent choices, the score is considered the utility associated with person i choosing outcome k. The predicted outcome is the one with the highest score.

The difference between the multinomial logit model and numerous other methods, models, algorithms, etc. with the same basic setup (the perceptron algorithm, support vector machines, linear discriminant analysis, etc.) is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. In particular, in the multinomial logit model, the score can directly be converted to a probability value, indicating the probability of observation i choosing outcome k given the measured characteristics of the observation. This provides a principled way of incorporating the prediction of a particular multinomial logit model into a larger procedure that may involve multiple such predictions, each with a possibility of error. Without such means of combining predictions, errors tend to multiply. For example, imagine a large predictive model that is broken down into a series of submodels where the prediction of a given submodel is used as the input of another submodel, and that prediction is in turn used as the input into a third submodel, etc. If each submodel has 90% accuracy in its predictions, and there are five submodels in series, then the overall model has only 0.9^5 ≈ 59% accuracy. If each submodel has 80% accuracy, then overall accuracy drops to 0.8^5 ≈ 33% accuracy. This issue is known as error propagation and is a serious problem in real-world predictive models, which are usually composed of numerous parts. Predicting probabilities of each possible outcome, rather than simply making a single optimal prediction, is one means of alleviating this issue.[citation needed]
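As a quick check of the arithmetic in this example (a minimal sketch in Python; the 90% and 80% figures are the hypothetical per-submodel accuracies from the paragraph above):

    # Overall accuracy of a chain of five submodels, assuming each stage is
    # correct only when every earlier stage was correct.
    print(0.9 ** 5)  # ~0.59, i.e. roughly 59% overall accuracy
    print(0.8 ** 5)  # ~0.33, i.e. roughly 33% overall accuracy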

Setup

The basic setup is the same as in logistic regression, the only difference being that the dependent variable is categorical rather than binary, i.e. there are K possible outcomes rather than just two. The following description is somewhat shortened; for more details, consult the logistic regression article.

Data points

Specifically, it is assumed that we have a series of N observed data points. Each data point i (ranging from 1 to N) consists of a set of M explanatory variables x1,i ... xM,i (also known as independent variables, predictor variables, features, etc.), and an associated categorical outcome Yi (also known as dependent variable, response variable), which can take on one of K possible values. These possible values represent logically separate categories (e.g. different political parties, blood types, etc.), and are often described mathematically by arbitrarily assigning each a number from 1 to K. The explanatory variables and outcome represent observed properties of the data points, and are often thought of as originating in the observations of N "experiments" — although an "experiment" may consist in nothing more than gathering data. The goal of multinomial logistic regression is to construct a model that explains the relationship between the explanatory variables and the outcome, so that the outcome of a new "experiment" can be correctly predicted for a new data point for which the explanatory variables, but not the outcome, are available. In the process, the model attempts to explain the relative effect of differing explanatory variables on the outcome.

Some examples:

  • The observed outcomes are different variants of a disease such as hepatitis (possibly including "no disease" and/or other related diseases) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age, blood pressure, outcomes of various liver-function tests, etc.). The goal is then to predict which disease is causing the observed liver-related symptoms in a new patient.
  • The observed outcomes are the party chosen by a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). The goal is then to predict the likely vote of a new voter with given characteristics.

Linear predictor

As in other forms of linear regression, multinomial logistic regression uses a linear predictor function $f(k,i)$ to predict the probability that observation i has outcome k, of the following form:

$$f(k,i) = \beta_{0,k} + \beta_{1,k} x_{1,i} + \beta_{2,k} x_{2,i} + \cdots + \beta_{M,k} x_{M,i},$$

where $\beta_{m,k}$ is a regression coefficient associated with the mth explanatory variable and the kth outcome. As explained in the logistic regression article, the regression coefficients and explanatory variables are normally grouped into vectors of size M+1, so that the predictor function can be written more compactly:

$$f(k,i) = \boldsymbol\beta_k \cdot \mathbf{x}_i,$$

where $\boldsymbol\beta_k$ is the set of regression coefficients associated with outcome k, and $\mathbf{x}_i$ (a row vector) is the set of explanatory variables associated with observation i.
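As a concrete sketch (toy shapes and numbers, not taken from the article): stacking the K coefficient vectors into a matrix makes the linear predictors for one observation a single matrix-vector product.

    import numpy as np

    K, M = 3, 4                                         # outcomes and explanatory variables
    rng = np.random.default_rng(0)
    beta = rng.normal(size=(K, M + 1))                  # one coefficient vector per outcome, incl. intercept
    x_i = np.concatenate(([1.0], rng.normal(size=M)))   # leading 1 supplies the intercept term

    scores = beta @ x_i                                 # f(k, i) = beta_k . x_i for k = 1..K
    print(scores)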

As a set of independent binary regressions

To arrive at the multinomial logit model, one can imagine, for K possible outcomes, running K-1 independent binary logistic regression models, in which one outcome is chosen as a "pivot" and then the other K-1 outcomes are separately regressed against the pivot outcome. If outcome K (the last outcome) is chosen as the pivot, the K-1 regression equations are:

$$\ln \frac{\Pr(Y_i = k)}{\Pr(Y_i = K)} = \boldsymbol\beta_k \cdot \mathbf{X}_i, \quad k < K.$$

This formulation is also known as the additive log ratio transform, commonly used in compositional data analysis. In other applications it is referred to as "relative risk".[7]

If we exponentiate both sides and solve for the probabilities, we get:

$$\Pr(Y_i = k) = \Pr(Y_i = K)\, e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}, \quad k < K.$$

Using the fact that all K of the probabilities must sum to one, we find:

$$\Pr(Y_i = K) = 1 - \sum_{j=1}^{K-1} \Pr(Y_i = j) = 1 - \sum_{j=1}^{K-1} \Pr(Y_i = K)\, e^{\boldsymbol\beta_j \cdot \mathbf{X}_i} \;\Rightarrow\; \Pr(Y_i = K) = \frac{1}{1 + \sum_{j=1}^{K-1} e^{\boldsymbol\beta_j \cdot \mathbf{X}_i}}.$$

We can use this to find the other probabilities:

$$\Pr(Y_i = k) = \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}}{1 + \sum_{j=1}^{K-1} e^{\boldsymbol\beta_j \cdot \mathbf{X}_i}}, \quad k < K.$$

The fact that we run multiple regressions reveals why the model relies on the assumption of independence of irrelevant alternatives described above.
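Numerically, the pivot formulation above translates directly into code (a minimal sketch with made-up coefficients, using outcome K as the pivot):

    import numpy as np

    beta = np.array([[0.2, -0.5, 1.0],    # beta_1: outcome 1 vs. pivot outcome K
                     [0.7,  0.3, -0.4]])  # beta_2: outcome 2 vs. pivot outcome K
    x_i = np.array([1.0, 0.5, -1.2])      # explanatory variables for observation i (incl. intercept)

    odds = np.exp(beta @ x_i)             # e^{beta_k . x_i} for k < K
    p_K = 1.0 / (1.0 + odds.sum())        # Pr(Y_i = K), the pivot probability
    p = np.append(odds * p_K, p_K)        # Pr(Y_i = k) for k = 1..K
    print(p, p.sum())                     # the probabilities sum to 1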

Estimating the coefficients

The unknown parameters in each vector βk are typically jointly estimated by maximum a posteriori (MAP) estimation, which is an extension of maximum likelihood using regularization of the weights to prevent pathological solutions (usually a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the weights, but other distributions are also possible). The solution is typically found using an iterative procedure such as generalized iterative scaling,[8] iteratively reweighted least squares (IRLS),[9] by means of gradient-based optimization algorithms such as L-BFGS,[4] or by specialized coordinate descent algorithms.[10]
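For instance, a minimal sketch of such a fit with scikit-learn (an assumed tooling choice, not one prescribed by the article): with the lbfgs solver, LogisticRegression fits the multinomial (softmax) form, and its L2 penalty plays the role of the zero-mean Gaussian prior mentioned above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))       # 200 observations, 4 explanatory variables (toy data)
    y = rng.integers(0, 3, size=200)    # 3 outcome categories, labeled 0..2 here

    # L2-penalized (MAP-style) fit with the L-BFGS solver; C is the inverse
    # regularization strength, so a larger C corresponds to a weaker prior.
    clf = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)
    clf.fit(X, y)

    print(clf.coef_.shape)              # one coefficient vector per class
    print(clf.predict_proba(X[:3]))     # per-class probabilities for the first observations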

As a log-linear model

The formulation of binary logistic regression as a log-linear model can be directly extended to multi-way regression. That is, we model the logarithm of the probability of seeing a given output using the linear predictor as well as an additional normalization factor, the logarithm of the partition function:

$$\ln \Pr(Y_i = k) = \boldsymbol\beta_k \cdot \mathbf{X}_i - \ln Z, \quad k \leq K.$$

As in the binary case, we need an extra term $-\ln Z$ to ensure that the whole set of probabilities forms a probability distribution, i.e. so that they all sum to one:

$$\sum_{k=1}^{K} \Pr(Y_i = k) = 1.$$

The reason why we need to add a term to ensure normalization, rather than multiply as is usual, is because we have taken the logarithm of the probabilities. Exponentiating both sides turns the additive term into a multiplicative factor, so that the probability is just the Gibbs measure:

$$\Pr(Y_i = k) = \frac{1}{Z}\, e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}, \quad k \leq K.$$

The quantity Z is called the partition function for the distribution. We can compute the value of the partition function by applying the above constraint that requires all probabilities to sum to 1:

$$1 = \sum_{k=1}^{K} \Pr(Y_i = k) = \sum_{k=1}^{K} \frac{1}{Z}\, e^{\boldsymbol\beta_k \cdot \mathbf{X}_i} = \frac{1}{Z} \sum_{k=1}^{K} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}.$$

Therefore:

$$Z = \sum_{k=1}^{K} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}.$$

Note that this factor is "constant" in the sense that it is not a function of Yi, which is the variable over which the probability distribution is defined. However, it is definitely not constant with respect to the explanatory variables, or crucially, with respect to the unknown regression coefficients βk, which we will need to determine through some sort of optimization procedure.

The resulting equations for the probabilities are

$$\Pr(Y_i = k) = \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}}{\sum_{j=1}^{K} e^{\boldsymbol\beta_j \cdot \mathbf{X}_i}}, \quad k \leq K.$$

Or generally:

$$\Pr(Y_i = c) = \frac{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}{\sum_{j=1}^{K} e^{\boldsymbol\beta_j \cdot \mathbf{X}_i}}.$$
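In code, this log-linear form is a direct transcription of the equations above (a minimal sketch; the scores stand in for the linear predictors βk · Xi):

    import numpy as np

    scores = np.array([2.0, 0.5, -1.0])   # beta_k . X_i for k = 1..K
    Z = np.exp(scores).sum()              # partition function
    p = np.exp(scores) / Z                # Gibbs measure, i.e. the softmax probabilities
    print(p, p.sum())                     # the probabilities sum to 1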

The following function:

$$\operatorname{softmax}(k, x_1, \ldots, x_n) = \frac{e^{x_k}}{\sum_{i=1}^{n} e^{x_i}}$$

is referred to as the softmax function. The reason is that the effect of exponentiating the values $x_1, \ldots, x_n$ is to exaggerate the differences between them. As a result, $\operatorname{softmax}(k, x_1, \ldots, x_n)$ will return a value close to 0 whenever $x_k$ is significantly less than the maximum of all the values, and will return a value close to 1 when applied to the maximum value, unless it is extremely close to the next-largest value. Thus, the softmax function can be used to construct a weighted average that behaves as a smooth function (which can be conveniently differentiated, etc.) and which approximates the indicator function

$$f(k) = \begin{cases} 1 & \text{if } k = \operatorname{arg\,max}(x_1, \ldots, x_n), \\ 0 & \text{otherwise.} \end{cases}$$

Thus, we can write the probability equations as

$$\Pr(Y_i = c) = \operatorname{softmax}(c, \boldsymbol\beta_1 \cdot \mathbf{X}_i, \ldots, \boldsymbol\beta_K \cdot \mathbf{X}_i).$$

The softmax function thus serves as the equivalent of the logistic function in binary logistic regression.
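A sketch of the softmax function as used here: subtracting the maximum score before exponentiating is a standard numerical-stability device (it leaves the result unchanged, since shifting all scores by the same constant cancels), and scaling the scores up shows how softmax approaches the argmax indicator function.

    import numpy as np

    def softmax(x):
        z = x - np.max(x)        # shift for numerical stability; the output is unchanged
        e = np.exp(z)
        return e / e.sum()

    scores = np.array([2.0, 1.0, -0.5])
    print(softmax(scores))       # a smooth distribution over the three outcomes
    print(softmax(10 * scores))  # sharper: nearly the indicator of the maximum score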

Note that not all of the $\boldsymbol\beta_k$ vectors of coefficients are uniquely identifiable. This is due to the fact that all probabilities must sum to 1, making one of them completely determined once all the rest are known. As a result, there are only K-1 separately specifiable probabilities, and hence K-1 separately identifiable vectors of coefficients. One way to see this is to note that if we add a constant vector to all of the coefficient vectors, the equations are identical:

$$\begin{aligned} \frac{e^{(\boldsymbol\beta_c + C) \cdot \mathbf{X}_i}}{\sum_{k=1}^{K} e^{(\boldsymbol\beta_k + C) \cdot \mathbf{X}_i}} &= \frac{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i} e^{C \cdot \mathbf{X}_i}}{\sum_{k=1}^{K} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i} e^{C \cdot \mathbf{X}_i}} \\ &= \frac{e^{C \cdot \mathbf{X}_i} e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}{e^{C \cdot \mathbf{X}_i} \sum_{k=1}^{K} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} \\ &= \frac{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}{\sum_{k=1}^{K} e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} \end{aligned}$$

As a result, it is conventional to set $C = -\boldsymbol\beta_K$ (or alternatively, one of the other coefficient vectors). Essentially, we set the constant so that one of the vectors becomes 0, and all of the other vectors get transformed into the difference between those vectors and the vector we chose. This is equivalent to "pivoting" around one of the K choices, and examining how much better or worse all of the other K-1 choices are, relative to the choice we are pivoting around. Mathematically, we transform the coefficients as follows:

$$\begin{aligned} \boldsymbol\beta'_k &= \boldsymbol\beta_k - \boldsymbol\beta_K, \quad k < K \\ \boldsymbol\beta'_K &= 0 \end{aligned}$$

This leads to the following equations:

$$\Pr(Y_i = k) = \frac{e^{\boldsymbol\beta'_k \cdot \mathbf{X}_i}}{1 + \sum_{j=1}^{K-1} e^{\boldsymbol\beta'_j \cdot \mathbf{X}_i}}, \quad k \leq K.$$

Other than the prime symbols on the regression coefficients, this is exactly the same as the form of the model described above, in terms of K-1 independent two-way regressions.
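The identifiability argument can be checked numerically (a minimal sketch, reusing a softmax helper as above):

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    rng = np.random.default_rng(0)
    beta = rng.normal(size=(3, 4))    # K = 3 coefficient vectors
    x_i = rng.normal(size=4)
    C = rng.normal(size=4)            # an arbitrary constant vector

    p_original = softmax(beta @ x_i)
    p_shifted = softmax((beta + C) @ x_i)           # add the same C to every beta_k
    p_pivoted = softmax((beta - beta[-1]) @ x_i)    # pivot on outcome K: beta'_k = beta_k - beta_K

    print(np.allclose(p_original, p_shifted))       # True: probabilities are unchanged
    print(np.allclose(p_original, p_pivoted))       # True: pivoting changes coefficients, not probabilities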

As a latent-variable model

It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models.

Imagine that, for each data point i and possible outcome k=1,2,...,K, there is a continuous latent variable Yi,k* (i.e. an unobserved random variable) that is distributed as follows:

$$Y_{i,k}^{\ast} = \boldsymbol\beta_k \cdot \mathbf{X}_i + \varepsilon_k, \quad k \leq K,$$

where $\varepsilon_k \sim \operatorname{EV}_1(0,1)$, i.e. a standard type-1 extreme value distribution.

This latent variable can be thought of as the utility associated with data point i choosing outcome k, where there is some randomness in the actual amount of utility obtained, which accounts for other unmodeled factors that go into the choice. The value of the actual variable $Y_i$ is then determined in a non-random fashion from these latent variables (i.e. the randomness has been moved from the observed outcomes into the latent variables), where outcome k is chosen if and only if the associated utility (the value of $Y_{i,k}^{\ast}$) is greater than the utilities of all the other choices, i.e. if the utility associated with outcome k is the maximum of all the utilities. Since the latent variables are continuous, the probability of two having exactly the same value is 0, so we ignore the scenario. That is:

$$\begin{aligned} \Pr(Y_i = 1) &= \Pr(Y_{i,1}^{\ast} > Y_{i,2}^{\ast} \text{ and } Y_{i,1}^{\ast} > Y_{i,3}^{\ast} \text{ and } \cdots \text{ and } Y_{i,1}^{\ast} > Y_{i,K}^{\ast}) \\ \Pr(Y_i = 2) &= \Pr(Y_{i,2}^{\ast} > Y_{i,1}^{\ast} \text{ and } Y_{i,2}^{\ast} > Y_{i,3}^{\ast} \text{ and } \cdots \text{ and } Y_{i,2}^{\ast} > Y_{i,K}^{\ast}) \\ &\;\;\vdots \\ \Pr(Y_i = K) &= \Pr(Y_{i,K}^{\ast} > Y_{i,1}^{\ast} \text{ and } Y_{i,K}^{\ast} > Y_{i,2}^{\ast} \text{ and } \cdots \text{ and } Y_{i,K}^{\ast} > Y_{i,K-1}^{\ast}) \end{aligned}$$

Or equivalently:

$$\Pr(Y_i = k) = \Pr\left(\max(Y_{i,1}^{\ast}, Y_{i,2}^{\ast}, \ldots, Y_{i,K}^{\ast}) = Y_{i,k}^{\ast}\right), \quad k \leq K.$$

Let's look more closely at the first equation, which we can write as follows:

$$\begin{aligned} \Pr(Y_i = 1) &= \Pr(Y_{i,1}^{\ast} > Y_{i,k}^{\ast}\ \forall\, k = 2, \ldots, K) \\ &= \Pr(Y_{i,1}^{\ast} - Y_{i,k}^{\ast} > 0\ \forall\, k = 2, \ldots, K) \\ &= \Pr(\boldsymbol\beta_1 \cdot \mathbf{X}_i + \varepsilon_1 - (\boldsymbol\beta_k \cdot \mathbf{X}_i + \varepsilon_k) > 0\ \forall\, k = 2, \ldots, K) \\ &= \Pr((\boldsymbol\beta_1 - \boldsymbol\beta_k) \cdot \mathbf{X}_i > \varepsilon_k - \varepsilon_1\ \forall\, k = 2, \ldots, K) \end{aligned}$$

There are a few things to realize here:

  1. In general, if $X \sim \operatorname{EV}_1(a,b)$ and $Y \sim \operatorname{EV}_1(a,b)$, then $X - Y \sim \operatorname{Logistic}(0,b)$. That is, the difference of two independent identically distributed extreme-value-distributed variables follows the logistic distribution, where the first parameter is unimportant. This is understandable since the first parameter is a location parameter, i.e. it shifts the mean by a fixed amount, and if two values are both shifted by the same amount, their difference remains the same. This means that all of the relational statements underlying the probability of a given choice involve the logistic distribution, which makes the initial choice of the extreme-value distribution, which seemed rather arbitrary, somewhat more understandable.
  2. The second parameter in an extreme-value or logistic distribution is a scale parameter, such that if $X \sim \operatorname{Logistic}(0,1)$, then $bX \sim \operatorname{Logistic}(0,b)$. This means that the effect of using an error variable with an arbitrary scale parameter in place of scale 1 can be compensated simply by multiplying all regression vectors by the same scale. Together with the previous point, this shows that the use of a standard extreme-value distribution (location 0, scale 1) for the error variables entails no loss of generality over using an arbitrary extreme-value distribution. In fact, the model is nonidentifiable (no single set of optimal coefficients) if the more general distribution is used.
  3. Because only differences of vectors of regression coefficients are used, adding an arbitrary constant to all coefficient vectors has no effect on the model. This means that, just as in the log-linear model, only K-1 of the coefficient vectors are identifiable, and the last one can be set to an arbitrary value (e.g. 0).

Actually finding the values of the above probabilities is somewhat difficult, and is a problem of computing a particular order statistic (the first, i.e. maximum) of a set of values. However, it can be shown that the resulting expressions are the same as in above formulations, i.e. the two are equivalent.
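The equivalence can also be checked by simulation (a sketch under the stated distributional assumptions; numpy's Gumbel sampler plays the role of the standard type-1 extreme value distribution):

    import numpy as np

    rng = np.random.default_rng(0)
    scores = np.array([1.0, 0.2, -0.5])            # beta_k . X_i for K = 3 outcomes

    # Monte Carlo: pick the outcome whose perturbed utility is largest.
    utilities = scores + rng.gumbel(size=(200_000, 3))
    choices = utilities.argmax(axis=1)
    empirical = np.bincount(choices, minlength=3) / len(choices)

    softmax = np.exp(scores) / np.exp(scores).sum()
    print(empirical)   # close to the softmax probabilities below
    print(softmax)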

Estimation of intercept

When using multinomial logistic regression, one category of the dependent variable is chosen as the reference category. Separate odds ratios are determined for all independent variables for each category of the dependent variable with the exception of the reference category, which is omitted from the analysis. The exponential beta coefficient represents the change in the odds of the dependent variable being in a particular category vis-a-vis the reference category, associated with a one unit change of the corresponding independent variable.
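A sketch of this interpretation with a hypothetical coefficient value (not taken from the article): if the fitted coefficient for one independent variable in the equation for category k versus the reference is 0.4, then each one-unit increase in that variable multiplies the odds of category k relative to the reference by exp(0.4).

    import numpy as np

    beta = 0.4            # hypothetical coefficient, category k vs. reference category
    print(np.exp(beta))   # ~1.49: the odds ratio per one-unit change in the predictor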


Likelihood function

The observed values $y_i \in \{0, 1, \dots, K\}$ for $i = 1, \dots, n$ of the explained variables are considered as realizations of stochastically independent, categorically distributed random variables $Y_1, \dots, Y_n$.

The likelihood function for this model is defined by:

$$L = \prod_{i=1}^{n} P(Y_i = y_i) = \prod_{i=1}^{n} \left[ \prod_{j=1}^{K} P(Y_i = j)^{\delta_{j,y_i}} \right],$$

where the index $i$ denotes the observations 1 to n and the index $j$ denotes the classes 1 to K. Here $\delta_{j,y_i} = \begin{cases} 1 & \text{for } j = y_i \\ 0 & \text{otherwise} \end{cases}$ is the Kronecker delta.

The negative log-likelihood function is therefore the well-known cross-entropy:

$$-\log L = -\sum_{i=1}^{n} \sum_{j=1}^{K} \delta_{j,y_i} \log P(Y_i = j).$$
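A minimal sketch of this loss (toy probabilities and labels; probs[i, j] stands for P(Y_i = j), and 0-based class indices are used here):

    import numpy as np

    probs = np.array([[0.7, 0.2, 0.1],   # predicted P(Y_i = j), one row per observation
                      [0.1, 0.6, 0.3],
                      [0.2, 0.2, 0.6]])
    y = np.array([0, 1, 2])              # observed classes

    # The Kronecker delta simply picks out the predicted probability of the observed class.
    nll = -np.log(probs[np.arange(len(y)), y]).sum()
    print(nll)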

Application in natural language processing

In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically estimated by maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see the section on estimating the coefficients above.

See also

  • Logistic regression
  • Multinomial probit

References

  1. ^ Greene, William H. (2012). Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 803–806. ISBN 978-0-273-75356-8.
  2. ^ Engel, J. (1988). "Polytomous logistic regression". Statistica Neerlandica. 42 (4): 233–252. doi:10.1111/j.1467-9574.1988.tb01238.x.
  3. ^ Menard, Scott (2002). Applied Logistic Regression Analysis. SAGE. p. 91. ISBN 9780761922087.
  4. ^ a b Malouf, Robert (2002). A comparison of algorithms for maximum entropy parameter estimation (PDF). Sixth Conf. on Natural Language Learning (CoNLL). pp. 49–55.
  5. ^ Belsley, David (1991). Conditioning diagnostics : collinearity and weak data in regression. New York: Wiley. ISBN 9780471528890.
  6. ^ Baltas, G.; Doyle, P. (2001). "Random Utility Models in Marketing Research: A Survey". Journal of Business Research. 51 (2): 115–125. doi:10.1016/S0148-2963(99)00058-2.
  7. ^ Stata Manual. "mlogit — Multinomial (polytomous) logistic regression".
  8. ^ Darroch, J.N. & Ratcliff, D. (1972). "Generalized iterative scaling for log-linear models". The Annals of Mathematical Statistics. 43 (5): 1470–1480. doi:10.1214/aoms/1177692379.
  9. ^ Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. pp. 206–209.
  10. ^ Yu, Hsiang-Fu; Huang, Fang-Lan; Lin, Chih-Jen (2011). "Dual coordinate descent methods for logistic regression and maximum entropy models" (PDF). Machine Learning. 85 (1–2): 41–75. doi:10.1007/s10994-010-5221-8.
