
Linear regression

In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression.[1] This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.[2] If the explanatory variables are measured with error then errors-in-variables models are required, also known as measurement error models.

In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.[3] Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.

Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4] This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine.

Linear regression has many practical uses. Most applications fall into one of the following two broad categories:

  • If the goal is error (i.e., variance) reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response.
  • If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.

Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Using the mean squared error (MSE) as the cost function on a dataset with many large outliers can result in a model that fits the outliers more than the bulk of the data, because MSE assigns greater weight to large errors; cost functions that are robust to outliers should be used in such cases. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.

Formulation

In linear regression, the observations (red) are assumed to be the result of random deviations (green) from an underlying relationship (blue) between a dependent variable (y) and an independent variable (x).

Given a data set $\{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^{n}$ of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable ε — an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form

$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i, \qquad i = 1, \ldots, n,$$
where $^{\mathsf T}$ denotes the transpose, so that $\mathbf{x}_i^{\mathsf T}\boldsymbol\beta$ is the inner product between the vectors $\mathbf{x}_i$ and $\boldsymbol\beta$.

Often these n equations are stacked together and written in matrix notation as

$$\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon,$$

where

$$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \qquad
\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^{\mathsf T} \\ \mathbf{x}_2^{\mathsf T} \\ \vdots \\ \mathbf{x}_n^{\mathsf T} \end{bmatrix}
= \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix},$$

$$\boldsymbol\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad
\boldsymbol\varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.$$

Notation and terminology

  • $\mathbf{y}$ is a vector of observed values $y_i$ ($i = 1, \ldots, n$) of the variable called the regressand, endogenous variable, response variable, target variable, measured variable, criterion variable, or dependent variable. This variable is also sometimes known as the predicted variable, but this should not be confused with predicted values, which are denoted $\hat{y}$. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
  • $\mathbf{X}$ may be seen as a matrix of row-vectors $\mathbf{x}_{i\cdot}$ or of n-dimensional column-vectors $\mathbf{x}_{\cdot j}$, which are known as regressors, exogenous variables, explanatory variables, covariates, input variables, predictor variables, or independent variables (not to be confused with the concept of independent random variables). The matrix $\mathbf{X}$ is sometimes called the design matrix.
    • Usually a constant is included as one of the regressors. In particular, $x_{i0} = 1$ for $i = 1, \ldots, n$. The corresponding element of β is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
    • Sometimes one of the regressors can be a non-linear function of another regressor or of the data values, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector β.
    • The values xij may be viewed as either observed values of random variables Xj or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however different approaches to asymptotic analysis are used in these two situations.
  • $\boldsymbol\beta$ is a $(p+1)$-dimensional parameter vector, where $\beta_0$ is the intercept term (if one is included in the model—otherwise $\boldsymbol\beta$ is p-dimensional). Its elements are known as effects or regression coefficients (although the latter term is sometimes reserved for the estimated effects). In simple linear regression, p=1, and the coefficient is known as regression slope. Statistical estimation and inference in linear regression focuses on β. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
  • $\boldsymbol\varepsilon$ is a vector of values $\varepsilon_i$. This part of the model is called the error term, disturbance term, or sometimes noise (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable y other than the regressors x. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method.

Fitting a linear model to a given data set usually requires estimating the regression coefficients $\boldsymbol\beta$ such that the error term $\boldsymbol\varepsilon = \mathbf{y} - \mathbf{X}\boldsymbol\beta$ is minimized. For example, it is common to use the sum of squared errors $\lVert\boldsymbol\varepsilon\rVert_2^2$ as a measure of $\boldsymbol\varepsilon$ for minimization.
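
As a concrete illustration of this notation, the following minimal Python sketch (toy numbers; all variable names are illustrative, not from the article) builds a design matrix with an intercept column and evaluates the sum of squared errors for one candidate coefficient vector:

```python
import numpy as np

# Toy data: n = 5 observations of one regressor x and a response y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Design matrix X with a leading column of ones for the intercept beta_0.
X = np.column_stack([np.ones_like(x), x])

# A candidate parameter vector beta = (beta_0, beta_1).
beta = np.array([0.0, 2.0])

# Error (residual) vector epsilon = y - X beta and its squared 2-norm.
eps = y - X @ beta
sse = float(eps @ eps)
print("sum of squared errors:", sse)
```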

Example

Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent $h_i$ at various moments in time $t_i$. Physics tells us that, ignoring the drag, the relationship can be modeled as

$$h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,$$

where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and εi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take regressors $\mathbf{x}_i = (x_{i1}, x_{i2}) = (t_i, t_i^2)$, the model takes on the standard form

$$h_i = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i.$$
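
A minimal Python sketch of this example, using simulated rather than measured data (assuming a true initial velocity of 20 m/s and standard gravity); ordinary least squares on the regressors $(t_i, t_i^2)$ then recovers estimates of β1 and β2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toss: true initial velocity 20 m/s, gravity term -0.5 * 9.81.
t = np.linspace(0.1, 2.0, 20)
h = 20.0 * t - 0.5 * 9.81 * t**2 + rng.normal(scale=0.05, size=t.size)

# Regressors x_i = (t_i, t_i^2); the model is linear in beta_1 and beta_2.
X = np.column_stack([t, t**2])
beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
print("estimated beta_1, beta_2:", beta_hat)  # approximately 20 and -4.905
```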

Assumptions

Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed]

 
Example of a cubic polynomial regression, which is a type of linear regression. Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.

The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):

  • Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free—that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models.
  • Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This technique is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. With this much flexibility, models such as polynomial regression often have "too much power", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)
  • Constant variance (a.k.a. homoscedasticity). This means that the variance of the errors does not depend on the values of the predictor variables. Thus the variability of the responses for given fixed values of the predictors is the same regardless of how large or small the responses are. This is often not the case, as a variable whose mean is large will typically have a greater variance than one whose mean is small. For example, a person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000—i.e., a standard deviation of around $20,000—while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between −$10,000 and $30,000. (In fact, as this shows, in many cases—often the same cases where the assumption of normally distributed errors fails—the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) The absence of homoscedasticity is called heteroscedasticity. In order to check this assumption, a plot of residuals versus predicted values (or the values of each individual predictor) can be examined for a "fanning effect" (i.e., increasing or decreasing vertical spread as one moves left to right on the plot); a small diagnostic sketch follows this list. A plot of the absolute or squared residuals versus the predicted values (or each predictor) can also be examined for a trend or curvature. Formal tests can also be used; see Heteroscedasticity. The presence of heteroscedasticity will result in an overall "average" estimate of variance being used instead of one that takes into account the true variance structure. This leads to less precise (but in the case of ordinary least squares, not biased) parameter estimates and biased standard errors, resulting in misleading tests and interval estimates. The mean squared error for the model will also be wrong. Various estimation techniques including weighted least squares and the use of heteroscedasticity-consistent standard errors can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g., fitting the logarithm of the response variable using a linear regression model, which implies that the response variable itself has a log-normal distribution rather than a normal distribution).
    Figure: visualization of heteroscedasticity in a scatter plot against 100 random fitted values, produced in MATLAB.
 
To check for violations of the assumptions of linearity, constant variance, and independence of errors within a linear regression model, the residuals are typically plotted against the predicted values (or each of the individual predictors). An apparently random scatter of points about the horizontal midline at 0 is ideal, but cannot rule out certain kinds of violations such as autocorrelation in the errors or their correlation with one or more covariates.
  • Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods such as generalized least squares are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue.
  • Lack of perfect multicollinearity in the predictors. For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables. This can be caused by accidentally duplicating a variable in the data, using a linear transformation of a variable along with the original (e.g., the same temperature measurements expressed in Fahrenheit and Celsius), or including a linear combination of multiple variables in the model, such as their mean. It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g., fewer data points than regression coefficients). Near violations of this assumption, where predictors are highly but not perfectly correlated, can reduce the precision of parameter estimates (see Variance inflation factor). In the case of perfect multicollinearity, the parameter vector β will be non-identifiable—it has no unique solution. In such a case, only some of the parameters can be identified (i.e., their values can only be estimated within some linear subspace of the full parameter space Rp). See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed,[5][6][7][8] some of which require additional assumptions such as "effect sparsity"—that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem.
  • Assumption of Zero Mean of Residuals: In regression analysis, another critical assumption is that the mean of the residuals is zero or close to zero. This assumption is fundamental for the validity of any conclusions drawn from the least squares estimates of the parameters. Residuals are the differences between the observed values and the values predicted by the model. If the mean of these residuals is not zero, it implies that the model consistently overestimates or underestimates the observed values, indicating a potential bias in the model estimation. Ensuring that the mean of the residuals is zero allows the model to be considered unbiased in terms of its error, which is crucial for the accurate interpretation of the regression coefficients.
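
The residual checks described above can be sketched as follows in Python (simulated data with deliberately non-constant error variance; the names and numbers are illustrative only):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Simulated data whose error standard deviation grows with the mean,
# a deliberate violation of the constant-variance assumption.
x = np.linspace(1.0, 10.0, 200)
y = 3.0 + 2.0 * x + rng.normal(scale=0.5 * x)  # noise scale proportional to x

# Ordinary least squares fit with an intercept.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# Crude numerical check: residual spread in the lower vs. upper half of fits.
lo, hi = fitted < np.median(fitted), fitted >= np.median(fitted)
print("residual std (low fitted values): ", resid[lo].std())
print("residual std (high fitted values):", resid[hi].std())

# Visual check for a "fanning effect" in the residuals-vs-fitted plot.
plt.scatter(fitted, resid, s=8)
plt.axhline(0.0, color="k", linewidth=1)
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.show()
```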

Violations of these assumptions can result in biased estimations of β, biased standard errors, untrustworthy confidence intervals and significance tests.[9] Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:

  • The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
  • The arrangement, or probability distribution of the predictor variables x has a major influence on the precision of estimates of β. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way to achieve a precise estimate of β.


Interpretation

The data sets in the Anscombe's quartet are designed to have approximately the same linear regression line (as well as nearly identical means, standard deviations, and correlations) but are graphically very different. This illustrates the pitfalls of relying solely on a fitted model to understand the relationship between variables.

A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed—that is, the expected value of the partial derivative of y with respect to xj. This is sometimes called the unique effect of xj on y. In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating only xj to y; this effect is the total derivative of y with respect to xj.

Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the ball-toss example above: it would be impossible to "hold $t_i$ fixed" and at the same time change the value of $t_i^2$).

It is possible that the unique effect can be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in xj, so that once that variable is in the model, there is no contribution of xj to the variation in y. Conversely, the unique effect of xj can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by xj. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to xj, thereby strengthening the apparent relationship with xj.
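
A small simulation (hypothetical data, not from the article) makes the distinction concrete: when two predictors are strongly correlated and only one of them truly drives the response, the marginal effect of the other can be large while its unique effect is near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# x1 and x2 are strongly correlated; only x2 truly drives y.
x2 = rng.normal(size=n)
x1 = x2 + 0.2 * rng.normal(size=n)
y = 1.0 + 2.0 * x2 + rng.normal(size=n)

# Marginal effect of x1: slope from the simple regression of y on x1.
X1 = np.column_stack([np.ones(n), x1])
marginal = np.linalg.lstsq(X1, y, rcond=None)[0][1]

# Unique effect of x1: its coefficient when x2 is also in the model.
X12 = np.column_stack([np.ones(n), x1, x2])
unique = np.linalg.lstsq(X12, y, rcond=None)[0][1]

print("marginal effect of x1:", round(marginal, 3))  # clearly nonzero (about 1.9)
print("unique effect of x1:  ", round(unique, 3))    # close to 0
```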

The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.

The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[10]

Extensions

Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.

Simple and multiple linear regression

Example of simple linear regression, which has one independent variable

The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression[11]).

Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is

$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i$$

for each observation $i = 1, \ldots, n$.

In the formula above we consider n observations of one dependent variable and p independent variables. Thus, $Y_i$ is the ith observation of the dependent variable, $X_{ij}$ is the ith observation of the jth independent variable, $j = 1, 2, \ldots, p$. The values $\beta_j$ represent parameters to be estimated, and $\varepsilon_i$ is the ith independent identically distributed normal error.

In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:

$$Y_{ij} = \beta_{0j} + \beta_{1j} X_{i1} + \beta_{2j} X_{i2} + \cdots + \beta_{pj} X_{ip} + \varepsilon_{ij}$$

for all observations indexed as i = 1, ... , n and for all dependent variables indexed as j = 1, ... , m.

Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression.

General linear models

The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, yi. Conditional linearity of $E(\mathbf{y}_i \mid \mathbf{x}_i) = \mathbf{x}_i^{\mathsf T} B$ is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").

Heteroscedastic models

Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares, and Generalized least squares.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.

Generalized linear models

Generalized linear models (GLMs) are a framework for modeling response variables that are bounded or discrete. This is used, for example:

  • when modeling positive quantities (e.g. prices or populations) that vary over a large scale—which are better described using a skewed distribution such as the log-normal distribution or Poisson distribution (although GLMs are not used for log-normal data, instead the response variable is simply transformed using the logarithm function);
  • when modeling categorical data, such as the choice of a given candidate in an election (which is better described using a Bernoulli distribution/binomial distribution for binary choices, or a categorical distribution/multinomial distribution for multi-way choices), where there are a fixed number of choices that cannot be meaningfully ordered;
  • when modeling ordinal data, e.g. ratings on a scale from 0 to 5, where the different outcomes can be ordered but where the quantity itself may not have any absolute meaning (e.g. a rating of 4 may not be "twice as good" in any objective sense as a rating of 2, but simply indicates that it is better than 2 or 3 but not as good as 5).

Generalized linear models allow for an arbitrary link function, g, that relates the mean of the response variable(s) to the predictors: $E(Y) = g^{-1}(XB)$. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the $(-\infty, \infty)$ range of the linear predictor and the range of the response variable.
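
As an illustration of the role of the link function, the sketch below uses a logit link (chosen here purely as an example): the linear predictor can take any real value, while the inverse link maps it into (0, 1), the range of a Bernoulli mean.

```python
import numpy as np

def logit(p):
    """Link function g: maps a mean in (0, 1) to the real line."""
    return np.log(p / (1.0 - p))

def inv_logit(eta):
    """Inverse link g^{-1}: maps a linear predictor back into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-eta))

# A linear predictor X beta can take any real value ...
eta = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])

# ... but the implied mean of a Bernoulli response stays in (0, 1).
mu = inv_logit(eta)
print(mu)         # [0.018, 0.269, 0.5, 0.731, 0.982]
print(logit(mu))  # recovers eta (up to rounding)
```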

Some common examples of GLMs are:

  • Poisson regression for count data.
  • Logistic regression and probit regression for binary data.
  • Multinomial logistic regression and multinomial probit regression for categorical data.
  • Ordered logit and ordered probit regression for ordinal data.

Single index models[clarification needed] allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor βx as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate β up to a proportionality constant.[12]

Hierarchical linear models

Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. They are often used where the variables of interest have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.

Errors-in-variables

Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.

Group effects

In a multiple linear regression model

$$y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon,$$

parameter $\beta_j$ of predictor variable $x_j$ represents the individual effect of $x_j$. It has an interpretation as the expected change in the response variable $y$ when $x_j$ increases by one unit with other predictor variables held constant. When $x_j$ is strongly correlated with other predictor variables, it is improbable that $x_j$ can increase by one unit with other variables held constant. In this case, the interpretation of $\beta_j$ becomes problematic as it is based on an improbable condition, and the effect of $x_j$ cannot be evaluated in isolation.

For a group of predictor variables, say, $\{x_1, x_2, \ldots, x_q\}$, a group effect $\xi(\mathbf{w})$ is defined as a linear combination of their parameters

$$\xi(\mathbf{w}) = w_1\beta_1 + w_2\beta_2 + \cdots + w_q\beta_q,$$

where $\mathbf{w} = (w_1, w_2, \ldots, w_q)^{\mathsf T}$ is a weight vector satisfying $\sum_{j=1}^{q} |w_j| = 1$. Because of the constraint on $\mathbf{w}$, $\xi(\mathbf{w})$ is also referred to as a normalized group effect. A group effect $\xi(\mathbf{w})$ has an interpretation as the expected change in $y$ when variables in the group $x_1, x_2, \ldots, x_q$ change by the amounts $w_1, w_2, \ldots, w_q$, respectively, at the same time with variables not in the group held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if $q = 1$, then the group effect reduces to an individual effect, and (ii) if $w_i = 1$ and $w_j = 0$ for $j \neq i$, then the group effect also reduces to an individual effect. A group effect $\xi(\mathbf{w})$ is said to be meaningful if the underlying simultaneous changes of the $q$ variables $(w_1, w_2, \ldots, w_q)^{\mathsf T}$ are probable.

Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by the least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by the least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables under which pairwise correlations among these variables are all positive, and standardize all $p$ predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that $\{x_1, x_2, \ldots, x_q\}$ is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let $y'$ be the centred $y$ and $x_j'$ be the standardized $x_j$. Then, the standardized linear regression model is

$$y' = \beta_1' x_1' + \beta_2' x_2' + \cdots + \beta_p' x_p' + \varepsilon.$$

Parameters $\beta_j$ in the original model, including $\beta_0$, are simple functions of $\beta_j'$ in the standardized model. The standardization of variables does not change their correlations, so $\{x_1', x_2', \ldots, x_q'\}$ is a group of strongly correlated variables in an APC arrangement and they are not strongly correlated with other predictor variables in the standardized model. A group effect of $\{x_1', x_2', \ldots, x_q'\}$ is

$$\xi'(\mathbf{w}) = w_1\beta_1' + w_2\beta_2' + \cdots + w_q\beta_q',$$

and its minimum-variance unbiased linear estimator is

$$\hat{\xi}'(\mathbf{w}) = w_1\hat{\beta}_1' + w_2\hat{\beta}_2' + \cdots + w_q\hat{\beta}_q',$$

where $\hat{\beta}_j'$ is the least squares estimator of $\beta_j'$. In particular, the average group effect of the $q$ standardized variables is

$$\xi_A = \frac{1}{q}\left(\beta_1' + \beta_2' + \cdots + \beta_q'\right),$$

which has an interpretation as the expected change in $y'$ when all $x_j'$ in the strongly correlated group increase by $(1/q)$th of a unit at the same time with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and in similar amounts. Thus, the average group effect $\xi_A$ is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator $\hat{\xi}_A = \frac{1}{q}(\hat{\beta}_1' + \hat{\beta}_2' + \cdots + \hat{\beta}_q')$, even when individually none of the $\beta_j'$ can be accurately estimated by $\hat{\beta}_j'$.
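
A rough simulation in Python (hypothetical, illustrating the logic above rather than reproducing the cited paper) shows that with strongly positively correlated, standardized predictors, the individual least squares coefficients are estimated very noisily while their average is estimated stably:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 50, 3
true_beta = np.array([1.0, 1.0, 1.0])

avg_estimates, first_coef_estimates = [], []
for _ in range(500):
    # Strongly positively correlated predictors (an APC-like arrangement).
    common = rng.normal(size=n)
    X = common[:, None] + 0.05 * rng.normal(size=(n, q))
    # Standardize: mean zero and unit length, as in the text.
    X = X - X.mean(axis=0)
    X = X / np.linalg.norm(X, axis=0)
    y = X @ true_beta + 0.5 * rng.normal(size=n)
    y = y - y.mean()

    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    first_coef_estimates.append(beta_hat[0])
    avg_estimates.append(beta_hat.mean())

# The individual coefficient varies wildly across simulations;
# the average group effect estimate is far more stable.
print("std of an individual coefficient estimate:", np.std(first_coef_estimates))
print("std of the average group effect estimate: ", np.std(avg_estimates))
```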

Not all group effects are meaningful or can be accurately estimated. For example, $\beta_1'$ is a special group effect with weights $w_1 = 1$ and $w_j = 0$ for $j \neq 1$, but it cannot be accurately estimated by $\hat{\beta}_1'$. It is also not a meaningful effect. In general, for a group of $q$ strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors $\mathbf{w}$ are at or near the centre of the simplex $\sum_{j=1}^{q} w_j = 1$ ($w_j \geq 0$) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.

Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the $q$ variables via testing $H_0: \xi_A = 0$ versus $H_1: \xi_A \neq 0$, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate.

A group effect of the original variables $\{x_1, x_2, \ldots, x_q\}$ can be expressed as a constant times a group effect of the standardized variables $\{x_1', x_2', \ldots, x_q'\}$. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[13]

Others

In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.

Estimation methods

A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.

Some of the more common estimation techniques for linear regression are summarized below.

Least-squares estimation and related techniques

Francis Galton's 1886[14] illustration of the correlation between the heights of adults and their parents. The observation that adult children's heights tended to deviate less from the mean height than their parents suggested the concept of "regression toward the mean", giving regression its name. The "locus of horizontal tangential points" passing through the leftmost and rightmost points on the ellipse (which is a level curve of the bivariate normal distribution estimated from the data) is the OLS estimate of the regression of parents' heights on children's heights, while the "locus of vertical tangential points" is the OLS estimate of the regression of children's heights on parent's heights. The major axis of the ellipse is the TLS estimate.

Assuming that the independent variable is $\vec{x}_i = \left(x_{i1}, x_{i2}, \ldots, x_{im}\right)$ and the model's parameters are $\vec{\beta} = \left(\beta_0, \beta_1, \ldots, \beta_m\right)$, then the model's prediction would be

$$y_i \approx \beta_0 + \sum_{j=1}^{m} \beta_j x_{ij}.$$

If $\vec{x}_i$ is extended to $\vec{x}_i = \left(1, x_{i1}, x_{i2}, \ldots, x_{im}\right)$ then $y_i$ would become a dot product of the parameter and the independent variable, i.e.

$$y_i \approx \sum_{j=0}^{m} \beta_j x_{ij} = \vec{\beta} \cdot \vec{x}_i.$$

In the least-squares setting, the optimum parameter vector is defined as the one that minimizes the sum of squared losses:

$$\hat{\vec{\beta}} = \underset{\vec{\beta}}{\operatorname{arg\,min}}\, L\left(D, \vec{\beta}\right) = \underset{\vec{\beta}}{\operatorname{arg\,min}} \sum_{i=1}^{n} \left(\vec{\beta} \cdot \vec{x}_i - y_i\right)^2$$

Now putting the independent and dependent variables in matrices $X$ and $Y$ respectively, the loss function can be rewritten as:

$$L\left(D, \vec{\beta}\right) = \left\lVert X\vec{\beta} - Y \right\rVert^2 = \left(X\vec{\beta} - Y\right)^{\mathsf T}\left(X\vec{\beta} - Y\right) = Y^{\mathsf T}Y - Y^{\mathsf T}X\vec{\beta} - \vec{\beta}^{\mathsf T}X^{\mathsf T}Y + \vec{\beta}^{\mathsf T}X^{\mathsf T}X\vec{\beta}$$

As the loss is convex, the optimum solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):

$$\frac{\partial L\left(D, \vec{\beta}\right)}{\partial \vec{\beta}} = -2X^{\mathsf T}Y + 2X^{\mathsf T}X\vec{\beta}$$

Setting the gradient to zero produces the optimum parameter:

$$\hat{\vec{\beta}} = \left(X^{\mathsf T}X\right)^{-1}X^{\mathsf T}Y$$

Note: To prove that the $\hat{\vec{\beta}}$ obtained is indeed the minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite; this holds whenever $X$ has full column rank, since the Hessian is $2X^{\mathsf T}X$.
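
A short sketch of the closed-form solution just derived, on toy data; the explicit normal-equation form is shown only for illustration, since an orthogonal-decomposition solver such as numpy.linalg.lstsq is numerically preferable in practice:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data set with an intercept column and two predictors.
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([1.0, 2.0, -3.0])
y = X @ true_beta + 0.1 * rng.normal(size=n)

# Closed form from setting the gradient to zero: (X^T X)^{-1} X^T y.
beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically stabler equivalent based on an orthogonal decomposition.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal_eq)
print(np.allclose(beta_normal_eq, beta_lstsq))  # True
```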

Linear least squares methods include mainly:

  • Ordinary least squares
  • Weighted least squares
  • Generalized least squares
  • Linear Template Fit[15]

Maximum-likelihood estimation and related techniques

  • Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family ƒθ of probability distributions.[16] When fθ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
  • Ridge regression[17][18][19] and other forms of penalized estimation, such as Lasso regression,[5] deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
  • Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.[20]
  • Adaptive estimation. If we assume that error terms are independent of the regressors, $\varepsilon_i \perp \mathbf{x}_i$, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21]

Other estimation techniques

Comparison of the Theil–Sen estimator (black) and simple linear regression (blue) for a set of points with outliers
  • Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients β are assumed to be random variables with a specified prior distribution. The prior distribution can bias the solutions for the regression coefficients, in a way similar to (but more general than) ridge regression or lasso regression. In addition, the Bayesian estimation process produces not a single point estimate for the "best" values of the regression coefficients but an entire posterior distribution, completely describing the uncertainty surrounding the quantity. This can be used to estimate the "best" coefficients using the mean, mode, median, any quantile (see quantile regression), or any other function of the posterior distribution.
  • Quantile regression focuses on the conditional quantiles of y given X rather than the conditional mean of y given X. Linear quantile regression models a particular conditional quantile, for example the conditional median, as a linear function βTx of the predictors.
  • Mixed models are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as parametric models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as normal random variables, there is a close connection between mixed models and generalized least squares.[22] Fixed effects estimation is an alternative approach to analyzing this type of data.
  • Principal component regression (PCR)[7][8] is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis, and then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. The partial least squares regression is the extension of the PCR method which does not suffer from the mentioned deficiency.
  • Least-angle regression[6] is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
  • The Theil–Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers.[23] (A minimal sketch of this pairwise-slope idea follows this list.)
  • Other robust estimation techniques, including the α-trimmed mean approach, and L-, M-, S-, and R-estimators have been introduced.
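
A minimal Python sketch of the pairwise-slope idea behind the Theil–Sen estimator (brute force over all pairs, fine for small samples; illustrative only, not a reference implementation):

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Slope = median of slopes through all pairs of points; intercept = median residual."""
    slopes = [
        (y[j] - y[i]) / (x[j] - x[i])
        for i, j in combinations(range(len(x)), 2)
        if x[j] != x[i]
    ]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

# Data with one gross outlier; OLS is pulled toward it, Theil-Sen is not.
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[9] = 100.0  # outlier

print("Theil-Sen:", theil_sen(x, y))
ols_slope, ols_intercept = np.polyfit(x, y, 1)
print("OLS:      ", (ols_slope, ols_intercept))
```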

Applications

Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.

Trend line

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line.
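
For example, a straight trend line can be fitted to a short series by least squares (toy yearly values, purely illustrative):

```python
import numpy as np

years = np.arange(2015, 2024)
values = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.3, 13.0, 13.6, 14.1])

# Degree-1 polynomial fit = straight trend line: value ~ slope * year + intercept.
slope, intercept = np.polyfit(years, values, 1)
print(f"estimated trend: {slope:.3f} units per year")
```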

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

Epidemiology

Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.

Finance

The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
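
As a sketch with made-up return series (not real market data), beta is the slope obtained by regressing the asset's excess returns on the market's excess returns:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated monthly excess returns: the market, and an asset with true beta 1.3.
market = rng.normal(0.01, 0.04, size=120)
asset = 1.3 * market + rng.normal(0.0, 0.02, size=120)

# Regress asset returns on market returns; the slope is the estimated beta.
X = np.column_stack([np.ones(market.size), market])
alpha, beta = np.linalg.lstsq(X, asset, rcond=None)[0]
print("estimated beta:", round(beta, 2))  # close to 1.3
```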

Economics

Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]

Environmental science

Linear regression finds application in a wide range of environmental science settings, such as land use,[28] infectious diseases,[29] and air pollution.[30]

Machine learning

Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.[31]

History

Least squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well-known and for using it extensively in the social sciences.[32]

See also

References

Citations

  1. ^ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient.
  2. ^ Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression – Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, vol. 709 (3rd ed.), John Wiley & Sons, p. 19, ISBN 9781118391679.
  3. ^ Hilary L. Seal (1967). "The historical development of the Gauss linear model". Biometrika. 54 (1/2): 1–24. doi:10.1093/biomet/54.1-2.1. JSTOR 2333849.
  4. ^ Yan, Xin (2009), Linear Regression Analysis: Theory and Computing, World Scientific, pp. 1–2, ISBN 9789812834119, Regression analysis ... is probably one of the oldest topics in mathematical statistics dating back to about two hundred years ago. The earliest form of the linear regression was the least squares method, which was published by Legendre in 1805, and by Gauss in 1809 ... Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun.
  5. ^ a b Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B. 58 (1): 267–288. JSTOR 2346178.
  6. ^ a b Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics. 32 (2): 407–451. arXiv:math/0406456. doi:10.1214/009053604000000067. JSTOR 3448465. S2CID 204004121.
  7. ^ a b Hawkins, Douglas M. (1973). "On the Investigation of Alternative Regressions by Principal Component Analysis". Journal of the Royal Statistical Society, Series C. 22 (3): 275–286. doi:10.2307/2346776. JSTOR 2346776.
  8. ^ a b Jolliffe, Ian T. (1982). "A Note on the Use of Principal Components in Regression". Journal of the Royal Statistical Society, Series C. 31 (3): 300–303. doi:10.2307/2348005. JSTOR 2348005.
  9. ^ Williams, Matt; Grajales, Carlos; Kurkiewicz, Dason (2019-11-25). "Assumptions of Multiple Regression: Correcting Two Misconceptions". Practical Assessment, Research, and Evaluation. 18 (1). doi:10.7275/55hn-wk47. ISSN 1531-7714.
  10. ^ Berk, Richard A. (2007). "Regression Analysis: A Constructive Critique". Criminal Justice Review. 32 (3): 301–302. doi:10.1177/0734016807304871. S2CID 145389362.
  11. ^ Hidalgo, Bertha; Goodman, Melody (2012-11-15). "Multivariate or Multivariable Regression?". American Journal of Public Health. 103 (1): 39–40. doi:10.2105/AJPH.2012.300897. ISSN 0090-0036. PMC 3518362. PMID 23153131.
  12. ^ Brillinger, David R. (1977). "The Identification of a Particular Nonlinear Time Series System". Biometrika. 64 (3): 509–515. doi:10.1093/biomet/64.3.509. JSTOR 2345326.
  13. ^ Tsao, Min (2022). "Group least squares regression for linear models with strongly correlated predictor variables". Annals of the Institute of Statistical Mathematics. 75 (2): 233–250. arXiv:1804.02499. doi:10.1007/s10463-022-00841-7. S2CID 237396158.
  14. ^ Galton, Francis (1886). "Regression Towards Mediocrity in Hereditary Stature". The Journal of the Anthropological Institute of Great Britain and Ireland. 15: 246–263. doi:10.2307/2841583. ISSN 0959-5295. JSTOR 2841583.
  15. ^ Britzger, Daniel (2022). "The Linear Template Fit". Eur. Phys. J. C. 82 (8): 731. arXiv:2112.01548. Bibcode:2022EPJC...82..731B. doi:10.1140/epjc/s10052-022-10581-w. S2CID 244896511.
  16. ^ Lange, Kenneth L.; Little, Roderick J. A.; Taylor, Jeremy M. G. (1989). "Robust Statistical Modeling Using the t Distribution" (PDF). Journal of the American Statistical Association. 84 (408): 881–896. doi:10.2307/2290063. JSTOR 2290063.
  17. ^ Swindel, Benee F. (1981). "Geometry of Ridge Regression Illustrated". The American Statistician. 35 (1): 12–15. doi:10.2307/2683577. JSTOR 2683577.
  18. ^ Draper, Norman R.; van Nostrand, R. Craig (1979). "Ridge Regression and James-Stein Estimation: Review and Comments". Technometrics. 21 (4): 451–466. doi:10.2307/1268284. JSTOR 1268284.
  19. ^ Hoerl, Arthur E.; Kennard, Robert W.; Hoerl, Roger W. (1985). "Practical Use of Ridge Regression: A Challenge Met". Journal of the Royal Statistical Society, Series C. 34 (2): 114–120. JSTOR 2347363.
  20. ^ Narula, Subhash C.; Wellington, John F. (1982). "The Minimum Sum of Absolute Errors Regression: A State of the Art Survey". International Statistical Review. 50 (3): 317–326. doi:10.2307/1402501. JSTOR 1402501.
  21. ^ Stone, C. J. (1975). "Adaptive maximum likelihood estimators of a location parameter". The Annals of Statistics. 3 (2): 267–284. doi:10.1214/aos/1176343056. JSTOR 2958945.
  22. ^ Goldstein, H. (1986). "Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares". Biometrika. 73 (1): 43–56. doi:10.1093/biomet/73.1.43. JSTOR 2336270.
  23. ^ Theil, H. (1950). "A rank-invariant method of linear and polynomial regression analysis. I, II, III". Nederl. Akad. Wetensch., Proc. 53: 386–392, 521–525, 1397–1412. MR 0036489.; Sen, Pranab Kumar (1968). "Estimates of the regression coefficient based on Kendall's tau". Journal of the American Statistical Association. 63 (324): 1379–1389. doi:10.2307/2285891. JSTOR 2285891. MR 0258201..
  24. ^ Deaton, Angus (1992). Understanding Consumption. Oxford University Press. ISBN 978-0-19-828824-4.
  25. ^ a b Krugman, Paul R.; Obstfeld, M.; Melitz, Marc J. (2012). International Economics: Theory and Policy (9th global ed.). Harlow: Pearson. ISBN 9780273754091.
  26. ^ Laidler, David E. W. (1993). The Demand for Money: Theories, Evidence, and Problems (4th ed.). New York: Harper Collins. ISBN 978-0065010985.
  27. ^ a b Ehrenberg; Smith (2008). Modern Labor Economics (10th international ed.). London: Addison-Wesley. ISBN 9780321538963.
  28. ^ Hoek, Gerard; Beelen, Rob; de Hoogh, Kees; Vienneau, Danielle; Gulliver, John; Fischer, Paul; Briggs, David (2008-10-01). "A review of land-use regression models to assess spatial variation of outdoor air pollution". Atmospheric Environment. 42 (33): 7561–7578. doi:10.1016/j.atmosenv.2008.05.057. ISSN 1352-2310.
  29. ^ Imai, Chisato; Hashizume, Masahiro (2015). "A Systematic Review of Methodology: Time Series Regression Analysis for Environmental Factors and Infectious Diseases". Tropical Medicine and Health. 43 (1): 1–9. doi:10.2149/tmh.2014-21. hdl:10069/35301.
  30. ^ Milionis, A. E.; Davies, T. D. (1994-09-01). "Regression and stochastic models for air pollution—I. Review, comments and suggestions". Atmospheric Environment. 28 (17): 2801–2810. doi:10.1016/1352-2310(94)90083-3. ISSN 1352-2310.
  31. ^ "Linear Regression (Machine Learning)" (PDF). University of Pittsburgh.
  32. ^ Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge: Harvard. ISBN 0-674-40340-1.

Sources

  • Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates
  • Charles Darwin. The Variation of Animals and Plants under Domestication. (1868) (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
  • Draper, N.R.; Smith, H. (1998). Applied Regression Analysis (3rd ed.). John Wiley. ISBN 978-0-471-17082-2.
  • Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," Journal of the Anthropological Institute, 15:246-263 (1886). (Facsimile at: [1])
  • Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). Econometric Models and Economic Forecasts, ch. 1 (Intro, incl. appendices on Σ operators & derivation of parameter est.) & Appendix 4.3 (mult. regression in matrix form).

Further reading

  • Pedhazur, Elazar J (1982). Multiple regression in behavioral research: Explanation and prediction (2nd ed.). New York: Holt, Rinehart and Winston. ISBN 978-0-03-041760-3.
  • Mathieu Rouaud, 2013: Probability, Statistics and Estimation Chapter 2: Linear Regression, Linear Regression with Error Bars and Nonlinear Regression.
  • National Physical Laboratory (1961). "Chapter 1: Linear Equations and Matrices: Direct Methods". Modern Computing Methods. Notes on Applied Science. Vol. 16 (2nd ed.). Her Majesty's Stationery Office.

External links

  • Least-Squares Regression, PhET Interactive simulations, University of Colorado at Boulder
  • DIY Linear Fit

linear, regression, statistics, linear, regression, statistical, model, which, estimates, linear, relationship, between, scalar, response, more, explanatory, variables, also, known, dependent, independent, variables, case, explanatory, variable, called, simple. In statistics linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables also known as dependent and independent variables The case of one explanatory variable is called simple linear regression for more than one the process is called multiple linear regression 1 This term is distinct from multivariate linear regression where multiple correlated dependent variables are predicted rather than a single scalar variable 2 If the explanatory variables are measured with error then errors in variables models are required also known as measurement error models In linear regression the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data Such models are called linear models 3 Most commonly the conditional mean of the response given the values of the explanatory variables or predictors is assumed to be an affine function of those values less commonly the conditional median or some other quantile is used Like all forms of regression analysis linear regression focuses on the conditional probability distribution of the response given the values of the predictors rather than on the joint probability distribution of all of these variables which is the domain of multivariate analysis Linear regression was the first type of regression analysis to be studied rigorously and to be used extensively in practical applications 4 This is because models which depend linearly on their unknown parameters are easier to fit than models which are non linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine Linear regression has many practical uses Most applications fall into one of the following two broad categories If the goal is error i e variance reduction in prediction or forecasting linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables After developing such a model if additional values of the explanatory variables are collected without an accompanying response value the fitted model can be used to make a prediction of the response If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables and in particular to determine whether some explanatory variables may have no linear relationship with the response at all or to identify which subsets of explanatory variables may contain redundant information about the response Linear regression models are often fitted using the least squares approach but they may also be fitted in other ways such as by minimizing the lack of fit in some other norm as with least absolute deviations regression or by minimizing a penalized version of the least squares cost function as in ridge regression L2 norm penalty and lasso L1 norm penalty Use of the Mean Squared Error MSE as the cost on a dataset that has many large outliers can result in a model that fits the outliers more than the true data due to the higher importance assigned by MSE to 
Contents

1. Formulation
   1.1 Notation and terminology
   1.2 Example
   1.3 Assumptions
   1.4 Interpretation
2. Extensions
   2.1 Simple and multiple linear regression
   2.2 General linear models
   2.3 Heteroscedastic models
   2.4 Generalized linear models
   2.5 Hierarchical linear models
   2.6 Errors-in-variables
   2.7 Group effects
   2.8 Others
3. Estimation methods
   3.1 Least squares estimation and related techniques
   3.2 Maximum likelihood estimation and related techniques
   3.3 Other estimation techniques
4. Applications
   4.1 Trend line
   4.2 Epidemiology
   4.3 Finance
   4.4 Economics
   4.5 Environmental science
   4.6 Machine learning
5. History
6. See also
7. References
8. Further reading
9. External links

Formulation

[Figure: In linear regression, the observations (red) are assumed to be the result of random deviations (green) from an underlying relationship (blue) between a dependent variable (y) and an independent variable (x).]

Given a data set $\{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^{n}$ of $n$ statistical units, a linear regression model assumes that the relationship between the dependent variable $y$ and the vector of regressors $\mathbf{x}$ is linear. This relationship is modeled through a disturbance term or error variable $\varepsilon$, an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and the regressors. Thus the model takes the form

$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i, \qquad i = 1, \ldots, n,$$

where $^{\mathsf T}$ denotes the transpose, so that $\mathbf{x}_i^{\mathsf T}\boldsymbol\beta$ is the inner product between the vectors $\mathbf{x}_i$ and $\boldsymbol\beta$.

Often these $n$ equations are stacked together and written in matrix notation as

$$\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon,$$

where

$$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \qquad
\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^{\mathsf T} \\ \mathbf{x}_2^{\mathsf T} \\ \vdots \\ \mathbf{x}_n^{\mathsf T} \end{bmatrix}
= \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix}, \qquad
\boldsymbol\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad
\boldsymbol\varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.$$
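To make the stacked matrix form concrete, here is a minimal numerical sketch in Python with NumPy (the article itself prescribes no software). The data values, the parameter vector, and the noise scale are all illustrative assumptions; the point is only that prepending a column of ones to the regressors lets $\beta_0$ act as the intercept in $\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon$.

```python
import numpy as np

# Illustrative data: n = 4 observations of p = 2 regressors (values are made up).
x = np.array([[0.5, 1.2],
              [1.0, 0.7],
              [1.5, 2.3],
              [2.0, 1.9]])
n, p = x.shape

# Design matrix X: prepend a column of ones so that beta_0 acts as the intercept.
X = np.hstack([np.ones((n, 1)), x])

beta = np.array([0.3, 2.0, -1.0])        # assumed parameter vector (beta_0, beta_1, beta_2)
rng = np.random.default_rng(0)
eps = rng.normal(scale=0.1, size=n)      # disturbance term epsilon

y = X @ beta + eps                       # stacked form of y_i = x_i^T beta + eps_i
print(y)
```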
Notation and terminology

- $\mathbf{y}$ is a vector of observed values $y_i$ ($i = 1, \ldots, n$) of the variable called the regressand, endogenous variable, response variable, target variable, measured variable, criterion variable, or dependent variable. This variable is also sometimes known as the predicted variable, but this should not be confused with predicted values, which are denoted $\hat{y}$. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by, the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
- $\mathbf{X}$ may be seen as a matrix of row vectors $\mathbf{x}_{i\cdot}$ or of $n$-dimensional column vectors $\mathbf{x}_{\cdot j}$, which are known as regressors, exogenous variables, explanatory variables, covariates, input variables, predictor variables, or independent variables (not to be confused with the concept of independent random variables). The matrix $\mathbf{X}$ is sometimes called the design matrix.
  - Usually a constant is included as one of the regressors. In particular, $x_{i0} = 1$ for $i = 1, \ldots, n$. The corresponding element of $\boldsymbol\beta$ is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
  - Sometimes one of the regressors can be a non-linear function of another regressor or of the data values, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector $\boldsymbol\beta$.
  - The values $x_{ij}$ may be viewed either as observed values of random variables $X_j$ or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations.
- $\boldsymbol\beta$ is a $(p+1)$-dimensional parameter vector, where $\beta_0$ is the intercept term (if one is included in the model; otherwise $\boldsymbol\beta$ is $p$-dimensional). Its elements are known as effects or regression coefficients (although the latter term is sometimes reserved for the estimated effects). In simple linear regression, $p = 1$ and the coefficient is known as the regression slope. Statistical estimation and inference in linear regression focuses on $\boldsymbol\beta$. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
- $\boldsymbol\varepsilon$ is a vector of values $\varepsilon_i$. This part of the model is called the error term, disturbance term, or sometimes noise (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable $y$ other than the regressors $\mathbf{x}$. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method.

Fitting a linear model to a given data set usually requires estimating the regression coefficients $\boldsymbol\beta$ such that the error term $\boldsymbol\varepsilon = \mathbf{y} - \mathbf{X}\boldsymbol\beta$ is minimized. For example, it is common to use the sum of squared errors $\|\boldsymbol\varepsilon\|_2^2$ as a measure of $\boldsymbol\varepsilon$ for minimization.

Example

Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent $h_i$ at various moments in time $t_i$. Physics tells us that, ignoring the drag, the relationship can be modeled as

$$h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,$$

where $\beta_1$ determines the initial velocity of the ball, $\beta_2$ is proportional to the standard gravity, and $\varepsilon_i$ is due to measurement errors. Linear regression can be used to estimate the values of $\beta_1$ and $\beta_2$ from the measured data. This model is non-linear in the time variable, but it is linear in the parameters $\beta_1$ and $\beta_2$; if we take the regressors $\mathbf{x}_i = (x_{i1}, x_{i2}) = (t_i, t_i^2)$, the model takes on the standard form

$$h_i = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta + \varepsilon_i.$$
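The following sketch (Python/NumPy) illustrates why this model still counts as linear: the regressors $(t_i, t_i^2)$ are built from the data and the parameters are recovered by least squares. The simulated trajectory, the noise level, and the least squares solver are assumptions made for the example, not part of the article's own exposition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated toss: assumed initial velocity 20 m/s and gravity term -0.5 * 9.81.
t = np.linspace(0.1, 2.0, 30)
h_true = 20.0 * t - 0.5 * 9.81 * t**2
h = h_true + rng.normal(scale=0.2, size=t.size)   # measurement noise

# Regressors x_i = (t_i, t_i^2): nonlinear in t, but the model is linear in beta.
X = np.column_stack([t, t**2])

beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
print("estimated beta_1 (initial velocity):", beta_hat[0])
print("estimated beta_2 (proportional to -g/2):", beta_hat[1])
```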
Assumptions

(See also: Ordinary least squares, Assumptions.)

Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables, and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed]

[Figure: Example of a cubic polynomial regression, which is a type of linear regression. Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.]

The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares); a sketch of how some of them can be checked from the residuals follows this list.

- Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free, that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models.
- Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This technique is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. With this much flexibility, models such as polynomial regression often have "too much power", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)
  [Figure: Visualization of heteroscedasticity in a scatter plot of residuals against 100 random fitted values, produced in MATLAB.]
- Constant variance (a.k.a. homoscedasticity). This means that the variance of the errors does not depend on the values of the predictor variables. Thus the variability of the responses for given fixed values of the predictors is the same regardless of how large or small the responses are. This is often not the case, as a variable whose mean is large will typically have a greater variance than one whose mean is small. For example, a person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000 (i.e. a standard deviation of around $20,000), while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between -$10,000 and $30,000. In fact, as this shows, in many cases (often the same cases where the assumption of normally distributed errors fails) the variance or standard deviation should be predicted to be proportional to the mean, rather than constant. The absence of homoscedasticity is called heteroscedasticity. In order to check this assumption, a plot of residuals versus predicted values (or the values of each individual predictor) can be examined for a "fanning effect" (i.e. increasing or decreasing vertical spread as one moves left to right on the plot). A plot of the absolute or squared residuals versus the predicted values (or each predictor) can also be examined for a trend or curvature. Formal tests can also be used; see Heteroscedasticity. The presence of heteroscedasticity will result in an overall "average" estimate of variance being used instead of one that takes into account the true variance structure. This leads to less precise (but, in the case of ordinary least squares, not biased) parameter estimates and biased standard errors, resulting in misleading tests and interval estimates. The mean squared error for the model will also be wrong. Various estimation techniques, including weighted least squares and the use of heteroscedasticity-consistent standard errors, can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g. fitting the logarithm of the response variable using a linear regression model, which implies that the response variable itself has a log-normal distribution rather than a normal distribution).
  [Figure: To check for violations of the assumptions of linearity, constant variance, and independence of errors within a linear regression model, the residuals are typically plotted against the predicted values (or each of the individual predictors). An apparently random scatter of points about the horizontal midline at 0 is ideal, but cannot rule out certain kinds of violations, such as autocorrelation in the errors or their correlation with one or more covariates.]
- Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods, such as generalized least squares, are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue.
- Lack of perfect multicollinearity in the predictors. For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables. This can be caused by accidentally duplicating a variable in the data, using a linear transformation of a variable along with the original (e.g. the same temperature measurements expressed in Fahrenheit and Celsius), or including a linear combination of multiple variables in the model, such as their mean. It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g. fewer data points than regression coefficients). Near violations of this assumption, where predictors are highly but not perfectly correlated, can reduce the precision of parameter estimates (see Variance inflation factor). In the case of perfect multicollinearity, the parameter vector $\boldsymbol\beta$ will be non-identifiable: it has no unique solution. In such a case, only some of the parameters can be identified (i.e. their values can only be estimated within some linear subspace of the full parameter space $\mathbb{R}^p$); see partial least squares regression. Methods for fitting linear models with multicollinearity have been developed,[5][6][7][8] some of which require additional assumptions such as "effect sparsity" (that a large fraction of the effects are exactly zero). Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem.
- Zero mean of residuals. In regression analysis, another critical assumption is that the mean of the residuals is zero or close to zero. This assumption is fundamental for the validity of any conclusions drawn from the least squares estimates of the parameters. Residuals are the differences between the observed values and the values predicted by the model. If the mean of these residuals is not zero, it implies that the model consistently overestimates or underestimates the observed values, indicating a potential bias in the model estimation. Ensuring that the mean of the residuals is zero allows the model to be considered unbiased in terms of its error, which is crucial for the accurate interpretation of the regression coefficients.

Violations of these assumptions can result in biased estimates of $\boldsymbol\beta$, biased standard errors, and untrustworthy confidence intervals and significance tests.[9]

Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:

- The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties, such as being unbiased and consistent.
- The arrangement, or probability distribution, of the predictor variables x has a major influence on the precision of estimates of $\boldsymbol\beta$. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way as to achieve a precise estimate of $\boldsymbol\beta$.
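As mentioned before the list, here is a minimal sketch of two of the residual checks described above: the mean of the residuals and a crude "fanning" comparison of residual spread for small versus large fitted values. It is written in Python/NumPy on simulated data; the data-generating process (deliberately heteroscedastic) and the median split are assumptions made for illustration, not a prescribed diagnostic procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, size=n)
# Heteroscedastic data: the error spread grows with x (violates constant variance on purpose).
y = 1.0 + 2.0 * x + rng.normal(scale=0.2 * (1 + x), size=n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# Zero-mean-of-residuals check (essentially zero when an intercept is included).
print("mean residual:", resid.mean())

# Crude heteroscedasticity check: compare residual spread below vs. above the median fit.
lo = resid[fitted <= np.median(fitted)].std()
hi = resid[fitted > np.median(fitted)].std()
print("residual spread, low vs. high fitted values:", lo, hi)
```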
Interpretation

[Figure: The data sets in Anscombe's quartet are designed to have approximately the same linear regression line (as well as nearly identical means, standard deviations, and correlations) but are graphically very different. This illustrates the pitfalls of relying solely on a fitted model to understand the relationship between variables.]

A fitted linear regression model can be used to identify the relationship between a single predictor variable $x_j$ and the response variable $y$ when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of $\beta_j$ is the expected change in $y$ for a one-unit change in $x_j$ when the other covariates are held fixed, that is, the expected value of the partial derivative of $y$ with respect to $x_j$. This is sometimes called the unique effect of $x_j$ on $y$. In contrast, the marginal effect of $x_j$ on $y$ can be assessed using a correlation coefficient or a simple linear regression model relating only $x_j$ to $y$; this effect is the total derivative of $y$ with respect to $x_j$.

Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold $t_i$ fixed" and at the same time change the value of $t_i^2$).

It is possible that the unique effect can be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in $x_j$, so that once that variable is in the model, there is no contribution of $x_j$ to the variation in $y$. Conversely, the unique effect of $x_j$ can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of $y$, but they mainly explain variation in a way that is complementary to what is captured by $x_j$. In this case, including the other variables in the model reduces the part of the variability of $y$ that is unrelated to $x_j$, thereby strengthening the apparent relationship with $x_j$.

The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.

The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[10]
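The contrast between unique and marginal effects can be seen numerically. The sketch below (Python/NumPy, with a made-up data-generating process in which $x_1$ is strongly correlated with $x_2$ and only $x_2$ drives $y$) compares the coefficient of $x_1$ from a simple regression (its marginal effect) with its coefficient from a multiple regression that also includes $x_2$ (its unique effect); the variable names and correlation strength are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x2 = rng.normal(size=n)
x1 = 0.9 * x2 + 0.1 * rng.normal(size=n)      # x1 strongly correlated with x2
y = 3.0 * x2 + rng.normal(scale=0.5, size=n)  # y depends on x2 only

def ols(X, y):
    """Least squares fit with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("marginal effect of x1 (simple regression):", ols(x1, y)[1])
print("unique effect of x1 (multiple regression):", ols(np.column_stack([x1, x2]), y)[1])
```

The marginal effect of $x_1$ is large (it proxies for $x_2$), while its unique effect is close to zero, exactly the situation described in the paragraph above.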
Extensions

Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.

Simple and multiple linear regression

[Figure: Example of simple linear regression, which has one independent variable.]

The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression).[11]

Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is

$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \ldots + \beta_p X_{ip} + \epsilon_i$$

for each observation $i = 1, \ldots, n$. In the formula above we consider $n$ observations of one dependent variable and $p$ independent variables. Thus, $Y_i$ is the $i$th observation of the dependent variable, and $X_{ij}$ is the $i$th observation of the $j$th independent variable ($j = 1, 2, \ldots, p$). The values $\beta_j$ represent parameters to be estimated, and $\epsilon_i$ is the $i$th independent, identically distributed normal error.

In the more general multivariate linear regression, there is one equation of the above form for each of $m > 1$ dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:

$$Y_{ij} = \beta_{0j} + \beta_{1j} X_{i1} + \beta_{2j} X_{i2} + \ldots + \beta_{pj} X_{ip} + \epsilon_{ij}$$

for all observations indexed as $i = 1, \ldots, n$ and for all dependent variables indexed as $j = 1, \ldots, m$.

Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e. the same as general linear regression.

General linear models

The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, $\mathbf{y}_i$. Conditional linearity of $E(\mathbf{y} \mid \mathbf{x}_i) = \mathbf{x}_i^{\mathsf T} B$ is still assumed, with a matrix $B$ replacing the vector $\boldsymbol\beta$ of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").

Heteroscedastic models

Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares and Generalized least squares; a sketch of weighted least squares follows below.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.
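As referenced just above, here is a minimal weighted least squares sketch (Python/NumPy). It assumes the error variances are known, takes the weights to be their inverses (a textbook choice, not something specified by this article), rescales each row of X and y by the square root of its weight, and then solves an ordinary least squares problem.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(1, 10, size=n)
sigma = 0.3 * x                               # assumed known, non-constant error standard deviations
y = 2.0 + 0.5 * x + rng.normal(scale=sigma)

X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma**2                            # weights = inverse error variances

# Weighted least squares via rescaling: minimize sum_i w_i * (y_i - x_i^T beta)^2.
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("WLS estimate:", beta_wls)
print("OLS estimate:", beta_ols)
```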
Generalized linear models

Generalized linear models (GLMs) are a framework for modeling response variables that are bounded or discrete. This is used, for example:

- when modeling positive quantities (e.g. prices or populations) that vary over a large scale, which are better described using a skewed distribution such as the log-normal distribution or Poisson distribution (although GLMs are not used for log-normal data; instead the response variable is simply transformed using the logarithm function);
- when modeling categorical data, such as the choice of a given candidate in an election (which is better described using a Bernoulli distribution/binomial distribution for binary choices, or a categorical distribution/multinomial distribution for multi-way choices), where there are a fixed number of choices that cannot be meaningfully ordered;
- when modeling ordinal data, e.g. ratings on a scale from 0 to 5, where the different outcomes can be ordered but where the quantity itself may not have any absolute meaning (e.g. a rating of 4 may not be "twice as good" in any objective sense as a rating of 2, but simply indicates that it is better than 2 or 3, but not as good as 5).

Generalized linear models allow for an arbitrary link function, g, that relates the mean of the response variable(s) to the predictors: $E(Y) = g^{-1}(XB)$. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the $(-\infty, \infty)$ range of the linear predictor and the range of the response variable.

Some common examples of GLMs are:

- Poisson regression for count data.
- Logistic regression and probit regression for binary data.
- Multinomial logistic regression and multinomial probit regression for categorical data.
- Ordered logit and ordered probit regression for ordinal data.

Single index models allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor $\boldsymbol\beta^{\mathsf T}\mathbf{x}$ as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate $\boldsymbol\beta$ up to a proportionality constant.[12]
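To make the link-function idea concrete, here is a small sketch of logistic regression (a GLM with the logit link) fitted by plain gradient ascent on the Bernoulli log-likelihood. Everything here, the simulated data, the learning rate, and the iteration count, is an illustrative assumption; in practice a statistics library's iteratively reweighted least squares routine would normally be used instead.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.5])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Binary response generated through the inverse logit link: E[Y | x] = g^{-1}(x^T beta).
y = rng.binomial(1, sigmoid(X @ beta_true))

# Fit by gradient ascent on the log-likelihood; gradient is X^T (y - p) / n.
beta = np.zeros(2)
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ beta)
    beta += lr * X.T @ (y - p) / n

print("estimated coefficients:", beta)   # should be close to beta_true
```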
Hierarchical linear models

Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the variables of interest have a natural hierarchical structure, such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.

Errors-in-variables

Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of $\boldsymbol\beta$ to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.

Group effects

(Further information: Multicollinearity.)

In a multiple linear regression model

$$y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon,$$

parameter $\beta_j$ of predictor variable $x_j$ represents the individual effect of $x_j$. It has an interpretation as the expected change in the response variable $y$ when $x_j$ increases by one unit with other predictor variables held constant. When $x_j$ is strongly correlated with other predictor variables, it is improbable that $x_j$ can increase by one unit with other variables held constant. In this case, the interpretation of $\beta_j$ becomes problematic, as it is based on an improbable condition, and the effect of $x_j$ cannot be evaluated in isolation.

For a group of predictor variables, say $\{x_1, x_2, \dots, x_q\}$, a group effect $\xi(\mathbf{w})$ is defined as a linear combination of their parameters,

$$\xi(\mathbf{w}) = w_1\beta_1 + w_2\beta_2 + \dots + w_q\beta_q,$$

where $\mathbf{w} = (w_1, w_2, \dots, w_q)^{\intercal}$ is a weight vector satisfying $\sum_{j=1}^{q} |w_j| = 1$. Because of the constraint on the $w_j$, $\xi(\mathbf{w})$ is also referred to as a normalized group effect. A group effect $\xi(\mathbf{w})$ has an interpretation as the expected change in $y$ when the variables in the group $x_1, x_2, \dots, x_q$ change by the amounts $w_1, w_2, \dots, w_q$, respectively, at the same time, with variables not in the group held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if $q = 1$, then the group effect reduces to an individual effect, and (ii) if $w_i = 1$ and $w_j = 0$ for $j \neq i$, then the group effect also reduces to an individual effect. A group effect $\xi(\mathbf{w})$ is said to be meaningful if the underlying simultaneous changes of the $q$ variables given by $(w_1, w_2, \dots, w_q)^{\intercal}$ are probable.

Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well defined, as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by least squares regression. A simple way to identify these meaningful group effects is to use an all-positive-correlations (APC) arrangement of the strongly correlated variables, under which pairwise correlations among these variables are all positive, and to standardize all $p$ predictor variables in the model so that they all have mean zero and length one.

To illustrate this, suppose that $\{x_1, x_2, \dots, x_q\}$ is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let $y'$ be the centred $y$ and $x_j'$ be the standardized $x_j$. Then the standardized linear regression model is

$$y' = \beta_1' x_1' + \cdots + \beta_p' x_p' + \varepsilon.$$

Parameters $\beta_j$ in the original model, including $\beta_0$, are simple functions of the $\beta_j'$ in the standardized model. The standardization of variables does not change their correlations, so $\{x_1', x_2', \dots, x_q'\}$ is a group of strongly correlated variables in an APC arrangement, and they are not strongly correlated with other predictor variables in the standardized model. A group effect of $\{x_1', x_2', \dots, x_q'\}$ is

$$\xi'(\mathbf{w}) = w_1\beta_1' + w_2\beta_2' + \dots + w_q\beta_q',$$

and its minimum-variance unbiased linear estimator is

$$\hat{\xi}'(\mathbf{w}) = w_1\hat{\beta}_1' + w_2\hat{\beta}_2' + \dots + w_q\hat{\beta}_q',$$

where $\hat{\beta}_j'$ is the least squares estimator of $\beta_j'$. In particular, the average group effect of the $q$ standardized variables is

$$\xi_A = \frac{1}{q}\left(\beta_1' + \beta_2' + \dots + \beta_q'\right),$$

which has an interpretation as the expected change in $y'$ when all $x_j'$ in the strongly correlated group increase by $1/q$th of a unit at the same time, with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and by a similar amount. Thus, the average group effect $\xi_A$ is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator $\hat{\xi}_A = \frac{1}{q}(\hat{\beta}_1' + \hat{\beta}_2' + \dots + \hat{\beta}_q')$, even when individually none of the $\beta_j'$ can be accurately estimated by $\hat{\beta}_j'$.

Not all group effects are meaningful or can be accurately estimated. For example, $\beta_1'$ is a special group effect with weights $w_1 = 1$ and $w_j = 0$ for $j \neq 1$, but it cannot be accurately estimated by $\hat{\beta}_1'$. It is also not a meaningful effect. In general, for a group of $q$ strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors $\mathbf{w}$ are at or near the centre of the simplex $\sum_{j=1}^{q} w_j = 1$ ($w_j \geq 0$) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful, as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.

Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the $q$ variables via testing $H_0: \xi_A = 0$ versus $H_1: \xi_A \neq 0$, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate.

A group effect of the original variables $\{x_1, x_2, \dots, x_q\}$ can be expressed as a constant times a group effect of the standardized variables $\{x_1', x_2', \dots, x_q'\}$. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[13]
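A small numerical sketch of the average group effect described above (Python/NumPy): the correlation structure, sample size, and true coefficients are assumptions chosen to make the multicollinearity problem visible, and the standardization follows the "mean zero, length one" convention in the text.

```python
import numpy as np

rng = np.random.default_rng(11)
n, q = 50, 3

# Three strongly, positively correlated predictors (an APC-style arrangement).
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n) for _ in range(q)])
y = X @ np.array([1.0, 1.0, 1.0]) + rng.normal(scale=1.0, size=n)

# Standardize predictors to mean zero and length one; centre the response.
Xs = X - X.mean(axis=0)
Xs = Xs / np.linalg.norm(Xs, axis=0)
yc = y - y.mean()

beta_hat, *_ = np.linalg.lstsq(Xs, yc, rcond=None)

# Individual standardized coefficients are unstable under strong multicollinearity ...
print("individual estimates:", beta_hat)
# ... while their average (the estimate of the average group effect xi_A) is far more stable.
print("estimated average group effect:", beta_hat.mean())
```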
Others

In Dempster-Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.

Estimation methods

A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and the theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency. Some of the more common estimation techniques for linear regression are summarized below.

Least squares estimation and related techniques

(Main article: Linear least squares.)

[Figure: Francis Galton's 1886[14] illustration of the correlation between the heights of adults and their parents. The observation that adult children's heights tended to deviate less from the mean height than their parents suggested the concept of "regression toward the mean", giving regression its name. The locus of "horizontal tangential points" passing through the leftmost and rightmost points on the ellipse (which is a level curve of the bivariate normal distribution estimated from the data) is the OLS estimate of the regression of parents' heights on children's heights, while the locus of "vertical tangential points" is the OLS estimate of the regression of children's heights on parents' heights. The major axis of the ellipse is the TLS estimate.]

Assuming that the independent variable is $\vec{x}_i = \left(x_1^i, x_2^i, \ldots, x_m^i\right)$ and the model's parameters are $\vec{\beta} = \left(\beta_0, \beta_1, \ldots, \beta_m\right)$, the model's prediction would be

$$y_i \approx \beta_0 + \sum_{j=1}^{m} \beta_j x_j^i.$$

If $\vec{x}_i$ is extended to $\vec{x}_i = \left(1, x_1^i, x_2^i, \ldots, x_m^i\right)$, then $y_i$ becomes a dot product of the parameter vector and the extended independent variable, i.e.

$$y_i \approx \sum_{j=0}^{m} \beta_j x_j^i = \vec{\beta} \cdot \vec{x}_i.$$

In the least squares setting, the optimum parameter vector is defined as the one that minimizes the sum of squared losses:

$$\vec{\hat{\beta}} = \underset{\vec{\beta}}{\arg\min}\, L\left(D, \vec{\beta}\right) = \underset{\vec{\beta}}{\arg\min} \sum_{i=1}^{n} \left(\vec{\beta} \cdot \vec{x}_i - y_i\right)^2.$$

Now, putting the independent and dependent variables in matrices $X$ and $Y$ respectively, the loss function can be rewritten as

$$\begin{aligned} L\left(D, \vec{\beta}\right) &= \|X\vec{\beta} - Y\|^2 \\ &= \left(X\vec{\beta} - Y\right)^{\mathsf T}\left(X\vec{\beta} - Y\right) \\ &= Y^{\mathsf T} Y - Y^{\mathsf T} X\vec{\beta} - \vec{\beta}^{\mathsf T} X^{\mathsf T} Y + \vec{\beta}^{\mathsf T} X^{\mathsf T} X\vec{\beta}. \end{aligned}$$

As the loss is convex, the optimum solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):

$$\frac{\partial L\left(D, \vec{\beta}\right)}{\partial \vec{\beta}} = \frac{\partial \left(Y^{\mathsf T} Y - Y^{\mathsf T} X\vec{\beta} - \vec{\beta}^{\mathsf T} X^{\mathsf T} Y + \vec{\beta}^{\mathsf T} X^{\mathsf T} X\vec{\beta}\right)}{\partial \vec{\beta}} = -2X^{\mathsf T} Y + 2X^{\mathsf T} X\vec{\beta}.$$

Setting the gradient to zero produces the optimum parameter:

$$\begin{aligned} -2X^{\mathsf T} Y + 2X^{\mathsf T} X\vec{\beta} &= 0 \\ \Rightarrow\quad X^{\mathsf T} X\vec{\beta} &= X^{\mathsf T} Y \\ \Rightarrow\quad \vec{\hat{\beta}} &= \left(X^{\mathsf T} X\right)^{-1} X^{\mathsf T} Y. \end{aligned}$$

Note: to prove that the $\hat{\beta}$ obtained is indeed the local minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite. This is provided by the Gauss-Markov theorem.

Linear least squares methods include mainly:

- Ordinary least squares
- Weighted least squares
- Generalized least squares
- Linear Template Fit[15]
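The closed-form solution above translates directly into code. The following sketch (Python/NumPy, on made-up data) solves the normal equations $X^{\mathsf T}X\beta = X^{\mathsf T}Y$ with a linear solve rather than an explicit matrix inverse, which is generally preferable numerically, and cross-checks the result against NumPy's own least squares routine.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, m))])   # extended with a column of ones
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal equations: (X^T X) beta = X^T y. Solve rather than invert for numerical stability.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check with the library solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print("normal-equations estimate:", beta_hat)
print("np.linalg.lstsq estimate: ", beta_lstsq)
```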
Maximum likelihood estimation and related techniques

Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family $f_\theta$ of probability distributions.[16] When $f_\theta$ is a normal distribution with zero mean and variance $\theta$, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when $\varepsilon$ follows a multivariate normal distribution with a known covariance matrix.

Ridge regression[17][18][19] and other forms of penalized estimation, such as lasso regression,[5] deliberately introduce bias into the estimation of $\boldsymbol\beta$ in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable $y$ for values of the predictors $x$ that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
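Ridge regression has a closed form closely analogous to the OLS solution above: $\hat{\beta} = (X^{\mathsf T}X + \lambda I)^{-1}X^{\mathsf T}y$. Here is a minimal sketch (Python/NumPy); the penalty value $\lambda$ and the convention of leaving the intercept unpenalized are assumptions made for the example rather than anything fixed by this article.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X^T X + lam * I)^{-1} X^T y, leaving the intercept unpenalized."""
    n, p = X.shape
    penalty = lam * np.eye(p)
    penalty[0, 0] = 0.0                      # do not shrink the intercept column
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

rng = np.random.default_rng(9)
n = 60
z = rng.normal(size=n)
# Two nearly collinear predictors make the OLS coefficients unstable.
X = np.column_stack([np.ones(n), z, z + 0.01 * rng.normal(size=n)])
y = X @ np.array([0.0, 1.0, 1.0]) + rng.normal(scale=0.5, size=n)

print("OLS coefficients:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("ridge coefficients:", ridge(X, y, lam=1.0))
```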
Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for $\varepsilon$.[20]

Adaptive estimation: if we assume that the error terms are independent of the regressors, $\varepsilon_i \perp \mathbf{x}_i$, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21]

Other estimation techniques

[Figure: Comparison of the Theil-Sen estimator (black) and simple linear regression (blue) for a set of points with outliers.]

- Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients β are assumed to be random variables with a specified prior distribution. The prior distribution can bias the solutions for the regression coefficients, in a way similar to (but more general than) ridge regression or lasso regression. In addition, the Bayesian estimation process produces not a single point estimate for the "best" values of the regression coefficients but an entire posterior distribution, completely describing the uncertainty surrounding the quantity. This can be used to estimate the "best" coefficients using the mean, mode, median, any quantile (see quantile regression), or any other function of the posterior distribution.
- Quantile regression focuses on the conditional quantiles of y given X rather than the conditional mean of y given X. Linear quantile regression models a particular conditional quantile, for example the conditional median, as a linear function $\beta^{\mathsf T}x$ of the predictors.
- Mixed models are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as parametric models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as normal random variables, there is a close connection between mixed models and generalized least squares.[22] Fixed effects estimation is an alternative approach to analyzing this type of data.
- Principal component regression (PCR)[7][8] is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis, and then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. Partial least squares regression is an extension of the PCR method which does not suffer from the mentioned deficiency.
- Least-angle regression[6] is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
- The Theil-Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers[23] (see the sketch after this list).
- Other robust estimation techniques, including the α-trimmed mean approach, and L-, M-, S-, and R-estimators, have been introduced.
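As noted in the list above, here is a compact sketch of the Theil-Sen idea (Python/NumPy, on simulated data to which a few gross outliers are added as an assumption of the example): the slope is the median over all pairwise slopes, and the intercept is then taken as the median of $y - \text{slope} \cdot x$, one common convention for completing the fit.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 40)
y = 1.0 + 0.8 * x + rng.normal(scale=0.3, size=x.size)
y[:4] += 15.0                                 # a few gross outliers

def theil_sen(x, y):
    """Median of pairwise slopes; intercept as the median residual offset."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return intercept, slope

ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
print("OLS (intercept, slope):      ", ols)
print("Theil-Sen (intercept, slope):", theil_sen(x, y))
```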
Applications

Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.

Trend line

(Main article: Trend estimation.)

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over the period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

Epidemiology

Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.

Finance

The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.

Economics

(Main article: Econometrics.)

Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]

Environmental science

Linear regression finds application in a wide range of environmental science applications, such as land use,[28] infectious diseases,[29] and air pollution.[30]

Machine learning

Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms, due to its relative simplicity and well-known properties.[31]

History

Least squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well known and for using it extensively in the social sciences.[32]

See also

- Analysis of variance
- Blinder-Oaxaca decomposition
- Censored regression model
- Cross-sectional regression
- Curve fitting
- Empirical Bayes method
- Errors and residuals
- Lack-of-fit sum of squares
- Line fitting
- Linear classifier
- Linear equation
- Logistic regression
- M-estimator
- Multivariate adaptive regression spline
- Nonlinear regression
- Nonparametric regression
- Normal equations
- Projection pursuit regression
- Response modeling methodology
- Segmented linear regression
- Standard deviation line
- Stepwise regression
- Structural break
- Support vector machine
- Truncated regression model
- Deming regression
References

Citations

1. David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. "A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has ... right hand side, each with its own slope coefficient."
2. Rencher, Alvin C.; Christensen, William F. (2012). "Chapter 10: Multivariate regression, Section 10.1: Introduction". Methods of Multivariate Analysis. Wiley Series in Probability and Statistics. Vol. 709 (3rd ed.). John Wiley & Sons. p. 19. ISBN 9781118391679.
3. Hilary L. Seal (1967). "The historical development of the Gauss linear model". Biometrika. 54 (1/2): 1-24. doi:10.1093/biomet/54.1-2.1. JSTOR 2333849.
4. Yan, Xin (2009). Linear Regression Analysis: Theory and Computing. World Scientific. pp. 1-2. ISBN 9789812834119. "Regression analysis is probably one of the oldest topics in mathematical statistics, dating back to about two hundred years ago. The earliest form of the linear regression was the least squares method, which was published by Legendre in 1805 and by Gauss in 1809. Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun."
5. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B. 58 (1): 267-288. JSTOR 2346178.
6. Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics. 32 (2): 407-451. arXiv:math/0406456. doi:10.1214/009053604000000067. JSTOR 3448465. S2CID 204004121.
7. Hawkins, Douglas M. (1973). "On the Investigation of Alternative Regressions by Principal Component Analysis". Journal of the Royal Statistical Society, Series C. 22 (3): 275-286. doi:10.2307/2346776. JSTOR 2346776.
8. Jolliffe, Ian T. (1982). "A Note on the Use of Principal Components in Regression". Journal of the Royal Statistical Society, Series C. 31 (3): 300-303. doi:10.2307/2348005. JSTOR 2348005.
9. Williams, Matt; Grajales, Carlos; Kurkiewicz, Dason (2019-11-25). "Assumptions of Multiple Regression: Correcting Two Misconceptions". Practical Assessment, Research, and Evaluation. 18 (1). doi:10.7275/55hn-wk47. ISSN 1531-7714.
10. Berk, Richard A. (2007). "Regression Analysis: A Constructive Critique". Criminal Justice Review. 32 (3): 301-302. doi:10.1177/0734016807304871. S2CID 145389362.
11. Hidalgo, Bertha; Goodman, Melody (2012-11-15). "Multivariate or Multivariable Regression?". American Journal of Public Health. 103 (1): 39-40. doi:10.2105/AJPH.2012.300897. ISSN 0090-0036. PMC 3518362. PMID 23153131.
12. Brillinger, David R. (1977). "The Identification of a Particular Nonlinear Time Series System". Biometrika. 64 (3): 509-515. doi:10.1093/biomet/64.3.509. JSTOR 2345326.
13. Tsao, Min (2022). "Group least squares regression for linear models with strongly correlated predictor variables". Annals of the Institute of Statistical Mathematics. 75 (2): 233-250. arXiv:1804.02499. doi:10.1007/s10463-022-00841-7. S2CID 237396158.
14. Galton, Francis (1886). "Regression Towards Mediocrity in Hereditary Stature". The Journal of the Anthropological Institute of Great Britain and Ireland. 15: 246-263. doi:10.2307/2841583. ISSN 0959-5295. JSTOR 2841583.
15. Britzger, Daniel (2022). "The Linear Template Fit". European Physical Journal C. 82 (8): 731. arXiv:2112.01548. doi:10.1140/epjc/s10052-022-10581-w. S2CID 244896511.
16. Lange, Kenneth L.; Little, Roderick J. A.; Taylor, Jeremy M. G. (1989). "Robust Statistical Modeling Using the t Distribution" (PDF). Journal of the American Statistical Association. 84 (408): 881-896. doi:10.2307/2290063. JSTOR 2290063.
17. Swindel, Benee F. (1981). "Geometry of Ridge Regression Illustrated". The American Statistician. 35 (1): 12-15. doi:10.2307/2683577. JSTOR 2683577.
18. Draper, Norman R.; van Nostrand, R. Craig (1979). "Ridge Regression and James-Stein Estimation: Review and Comments". Technometrics. 21 (4): 451-466. doi:10.2307/1268284. JSTOR 1268284.
19. Hoerl, Arthur E.; Kennard, Robert W.; Hoerl, Roger W. (1985). "Practical Use of Ridge Regression: A Challenge Met". Journal of the Royal Statistical Society, Series C. 34 (2): 114-120. JSTOR 2347363.
20. Narula, Subhash C.; Wellington, John F. (1982). "The Minimum Sum of Absolute Errors Regression: A State of the Art Survey". International Statistical Review. 50 (3): 317-326. doi:10.2307/1402501. JSTOR 1402501.
21. Stone, C. J. (1975). "Adaptive maximum likelihood estimators of a location parameter". The Annals of Statistics. 3 (2): 267-284. doi:10.1214/aos/1176343056. JSTOR 2958945.
22. Goldstein, H. (1986). "Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares". Biometrika. 73 (1): 43-56. doi:10.1093/biomet/73.1.43. JSTOR 2336270.
23. Theil, H. (1950). "A rank-invariant method of linear and polynomial regression analysis. I, II, III". Nederl. Akad. Wetensch., Proc. 53: 386-392, 521-525, 1397-1412. MR 0036489; Sen, Pranab Kumar (1968). "Estimates of the regression coefficient based on Kendall's tau". Journal of the American Statistical Association. 63 (324): 1379-1389. doi:10.2307/2285891. JSTOR 2285891. MR 0258201.
24. Deaton, Angus (1992). Understanding Consumption. Oxford University Press. ISBN 978-0-19-828824-4.
25. Krugman, Paul R.; Obstfeld, M.; Melitz, Marc J. (2012). International Economics: Theory and Policy (9th global ed.). Harlow: Pearson. ISBN 9780273754091.
26. Laidler, David E. W. (1993). The Demand for Money: Theories, Evidence, and Problems (4th ed.). New York: Harper Collins. ISBN 978-0065010985.
27. Ehrenberg; Smith (2008). Modern Labor Economics (10th international ed.). London: Addison-Wesley. ISBN 9780321538963.
28. Hoek, Gerard; Beelen, Rob; de Hoogh, Kees; Vienneau, Danielle; Gulliver, John; Fischer, Paul; Briggs, David (2008-10-01). "A review of land-use regression models to assess spatial variation of outdoor air pollution". Atmospheric Environment. 42 (33): 7561-7578. doi:10.1016/j.atmosenv.2008.05.057. ISSN 1352-2310.
29. Imai, Chisato; Hashizume, Masahiro (2015). "A Systematic Review of Methodology: Time Series Regression Analysis for Environmental Factors and Infectious Diseases". Tropical Medicine and Health. 43 (1): 1-9. doi:10.2149/tmh.2014-21. hdl:10069/35301.
30. Milionis, A. E.; Davies, T. D. (1994-09-01). "Regression and stochastic models for air pollution, I: Review, comments and suggestions". Atmospheric Environment. 28 (17): 2801-2810. doi:10.1016/1352-2310(94)90083-3. ISSN 1352-2310.
31. "Linear Regression (Machine Learning)" (PDF). University of Pittsburgh.
32. Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge: Harvard. ISBN 0-674-40340-1.

Sources

- Cohen, J.; Cohen, P.; West, S. G.; Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Charles Darwin. The Variation of Animals and Plants under Domestication (1868). (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
- Draper, N. R.; Smith, H. (1998). Applied Regression Analysis (3rd ed.). John Wiley. ISBN 978-0-471-17082-2.
- Francis Galton. "Regression Towards Mediocrity in Hereditary Stature". Journal of the Anthropological Institute, 15: 246-263 (1886).
- Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). Econometric Models and Economic Forecasts, ch. 1 (Intro, incl. appendices on summation operators and derivation of parameter estimates) and Appendix 4.3 (multiple regression in matrix form).

Further reading

- Pedhazur, Elazar J. (1982). Multiple Regression in Behavioral Research: Explanation and Prediction (2nd ed.). New York: Holt, Rinehart and Winston. ISBN 978-0-03-041760-3.
- Mathieu Rouaud (2013). Probability, Statistics and Estimation. Chapter 2: Linear Regression, Linear Regression with Error Bars, and Nonlinear Regression.
- National Physical Laboratory (1961). "Chapter 1: Linear Equations and Matrices: Direct Methods". Modern Computing Methods. Notes on Applied Science. Vol. 16 (2nd ed.). Her Majesty's Stationery Office.

External links

- Wikiversity has learning resources about linear regression.
- The Wikibook "R Programming" has a page on the topic of linear models.
- Wikimedia Commons has media related to linear regression.
- Least-Squares Regression, PhET Interactive Simulations, University of Colorado at Boulder.
- DIY Linear Fit.

Retrieved from https://en.wikipedia.org/w/index.php?title=Linear_regression&oldid=1219987221
