Simple linear regression

In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.

It is common to make the additional hypothesis that the ordinary least squares method should be used to minimize the residuals. Under this hypothesis, the accuracy of a line through the sample points is measured by the sum of squared residuals (vertical distances between the points of the data set and the fitted line), and the goal is to make this sum as small as possible. Other regression methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit.

The remainder of the article assumes an ordinary least squares regression. In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass $(\bar{x}, \bar{y})$ of the data points.

Fitting the regression line

Suppose there are n data points {(xi, yi), i = 1, ..., n}. The function that describes x and y is:

$$ y_i = \alpha + \beta x_i + \varepsilon_i. $$

The goal is to find the equation of the straight line

$$ y = \alpha + \beta x, $$

which would provide a "best" fit for the data points. Here the "best" will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals of the linear regression model. In other words, α (the y-intercept) and β (the slope) solve the following minimization problem:

$$ \text{Find } \min_{\alpha,\beta} Q(\alpha, \beta), \quad \text{for } Q(\alpha, \beta) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2. $$

By simply expanding to get a quadratic expression in α and β, it can be shown that the values of α and β that minimize the objective function Q are

$$ \hat{\beta} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \frac{\sum_{i=1}^n (x_i y_i - x_i \bar{y} - \bar{x} y_i + \bar{x}\bar{y})}{\sum_{i=1}^n (x_i^2 - 2 x_i \bar{x} + \bar{x}^2)} = \frac{\sum_{i=1}^n x_i y_i - \bar{y}\sum_{i=1}^n x_i - \bar{x}\sum_{i=1}^n y_i + n\bar{x}\bar{y}}{\sum_{i=1}^n x_i^2 - 2\bar{x}\sum_{i=1}^n x_i + n\bar{x}^2} = \frac{\frac{1}{n}\sum_{i=1}^n x_i y_i - \bar{x}\bar{y}}{\frac{1}{n}\sum_{i=1}^n x_i^2 - \bar{x}^2} = \frac{\overline{xy} - \bar{x}\bar{y}}{\overline{x^2} - \bar{x}^2} = \frac{\operatorname{Cov}[x, y]}{\operatorname{Var}[x]} = r_{xy}\frac{s_y}{s_x}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, $$

where rxy is the sample correlation coefficient between x and y, and sx and sy are the sample standard deviations of x and y. A horizontal bar over a quantity indicates the average value of that quantity. For example:

$$ \overline{xy} = \frac{1}{n}\sum_{i=1}^n x_i y_i. $$

Substituting the above expressions for $\hat{\alpha}$ and $\hat{\beta}$ into

$$ f = \hat{\alpha} + \hat{\beta} x, $$

yields

$$ \frac{f - \bar{y}}{s_y} = r_{xy}\frac{x - \bar{x}}{s_x}. $$

This shows that rxy is the slope of the regression line of the standardized data points (and that this line passes through the origin).
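To make the closed-form estimates concrete, here is a minimal Python sketch (the sample arrays x and y are hypothetical, and NumPy is assumed to be available); it is an illustration under those assumptions, not a prescribed implementation:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical sample
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical sample

# Slope: sample covariance of x and y divided by sample variance of x
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# Intercept: the fitted line passes through the center of mass (x-bar, y-bar)
alpha_hat = y.mean() - beta_hat * x.mean()

# Equivalent "bar" form: (mean(xy) - mean(x)*mean(y)) / (mean(x^2) - mean(x)^2)
beta_hat_bar = (np.mean(x * y) - x.mean() * y.mean()) / (np.mean(x ** 2) - x.mean() ** 2)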

It is sometimes useful to calculate rxy from the data independently using this equation:

$$ r_{xy} = \frac{\overline{xy} - \bar{x}\bar{y}}{\sqrt{\left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)}} $$

The coefficient of determination (R squared) is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
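As a hedged sketch (again with hypothetical arrays and NumPy assumed), the correlation coefficient can be computed from the bar notation and checked against the R² of the fitted line:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical sample
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical sample

# Correlation coefficient in "bar" notation
r_xy = (np.mean(x * y) - x.mean() * y.mean()) / np.sqrt(
    (np.mean(x ** 2) - x.mean() ** 2) * (np.mean(y ** 2) - y.mean() ** 2))

# R^2 of the single-predictor OLS fit equals r_xy squared
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
residuals = y - (alpha_hat + beta_hat * x)
r_squared = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
assert np.isclose(r_squared, r_xy ** 2)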

Linear regression without the intercept term

Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = βx, the OLS estimator for β simplifies to

$$ \hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \frac{\overline{xy}}{\overline{x^2}} $$

Substituting (x − h, y − k) in place of (x, y) gives the regression through (h, k):

$$ \hat{\beta} = \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} = \frac{\overline{xy} - k\bar{x} - h\bar{y} + hk}{\overline{x^2} - 2h\bar{x} + h^2} = \frac{\overline{xy} - \bar{x}\bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} = \frac{\operatorname{Cov}[x, y] + (\bar{x} - h)(\bar{y} - k)}{\operatorname{Var}[x] + (\bar{x} - h)^2} $$

The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
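A brief sketch of these no-intercept variants, using the same hypothetical data and an anchor point (h, k) chosen only for illustration:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical sample
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical sample

# Slope of the line forced through the origin: sum(x*y) / sum(x^2)
beta_origin = np.sum(x * y) / np.sum(x ** 2)

# Slope of the line forced through an arbitrary point (h, k)
h, k = 1.0, 2.0   # hypothetical anchor point
beta_hk = np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)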

Model-based properties

Describing the statistical properties of the simple linear regression estimators requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

Unbiasedness

The estimators α ^ and β ^ are unbiased. This requires that we interpret the estimators as random variables and so we have to assume that, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε called the error term. This error term has to be equal to zero on average, for each value of x. Under such interpretation, the least-squares estimators α ^ and β ^ will themselves be random variables, and they will unbiasedly estimate the "true values" α and β.

Confidence intervals

The formulas given in the previous section allow one to calculate the point estimates of α and β — that is, the coefficients of the regression line for the given set of data. However, those formulas don't tell us how precise the estimates are, i.e., how much the estimators α ^ and β ^ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.

The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:

  1. the errors in the regression are normally distributed (the so-called classic regression assumption), or
  2. the number of observations n is sufficiently large, in which case the estimator is approximately normally distributed.

The latter case is justified by the central limit theorem.

Normality assumption

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance $\sigma^2 / \sum_{i=1}^n (x_i - \bar{x})^2$, where σ2 is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to χ2 with n − 2 degrees of freedom, and independently from $\hat{\beta}$. This allows us to construct a t-statistic

$$ t = \frac{\hat{\beta} - \beta}{s_{\hat{\beta}}} \sim t_{n-2}, $$

where

$$ s_{\hat{\beta}} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^n \hat{\varepsilon}_i^2}{\sum_{i=1}^n (x_i - \bar{x})^2}} $$

is the standard error of the estimator $\hat{\beta}$.

This t-statistic has a Student's t-distribution with n − 2 degrees of freedom. Using it we can construct a confidence interval for β:

$$ \beta \in \left[\hat{\beta} - s_{\hat{\beta}}\, t^*_{n-2},\ \hat{\beta} + s_{\hat{\beta}}\, t^*_{n-2}\right], $$

at confidence level (1 − γ), where $t^*_{n-2}$ is the $\left(1 - \tfrac{\gamma}{2}\right)$-th quantile of the $t_{n-2}$ distribution. For example, if γ = 0.05 then the confidence level is 95%.

Similarly, the confidence interval for the intercept coefficient α is given by

$$ \alpha \in \left[\hat{\alpha} - s_{\hat{\alpha}}\, t^*_{n-2},\ \hat{\alpha} + s_{\hat{\alpha}}\, t^*_{n-2}\right], $$

at confidence level (1 − γ), where

$$ s_{\hat{\alpha}} = s_{\hat{\beta}}\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n-2)}\left(\sum_{j=1}^n \hat{\varepsilon}_j^2\right)\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar{x})^2}} $$
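Putting the two standard-error formulas together, a hedged Python sketch of the interval construction might look as follows (NumPy and SciPy are assumed to be available; the function name is only illustrative):

import numpy as np
from scipy.stats import t as student_t

def ols_confidence_intervals(x, y, gamma=0.05):
    """Return (1 - gamma) confidence intervals for alpha and beta (x, y are NumPy arrays)."""
    n = len(x)
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    residuals = y - alpha_hat - beta_hat * x
    # Standard errors of the slope and intercept, per the formulas above
    s_beta = np.sqrt(np.sum(residuals ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    s_alpha = s_beta * np.sqrt(np.mean(x ** 2))
    t_star = student_t.ppf(1 - gamma / 2, df=n - 2)   # (1 - gamma/2) quantile of t_{n-2}
    return ((alpha_hat - t_star * s_alpha, alpha_hat + t_star * s_alpha),
            (beta_hat - t_star * s_beta, beta_hat + t_star * s_beta))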

The confidence intervals for α and β give us a general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression example the point estimates are

$$ \hat{\alpha} = 0.859, \qquad \hat{\beta} = -1.817. $$

The 95% confidence intervals for these estimates are

$$ \alpha \in [0.76,\ 0.96], \qquad \beta \in [-2.06,\ -1.58]. $$

In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation

$$ \left.\hat{y}\right|_{x=\xi} \in \left[\hat{\alpha} + \hat{\beta}\xi \pm t^*_{n-2}\sqrt{\left(\frac{1}{n-2}\sum_{i=1}^n \hat{\varepsilon}_i^2\right)\left(\frac{1}{n} + \frac{(\xi - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right)}\right]. $$
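A hedged sketch of evaluating this band at a single point ξ (again assuming NumPy and SciPy; the helper name is illustrative):

import numpy as np
from scipy.stats import t as student_t

def confidence_band(x, y, xi, gamma=0.05):
    """Lower and upper limits of the (1 - gamma) confidence band at x = xi."""
    n = len(x)
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    residuals = y - alpha_hat - beta_hat * x
    t_star = student_t.ppf(1 - gamma / 2, df=n - 2)
    half_width = t_star * np.sqrt(np.sum(residuals ** 2) / (n - 2)
                                  * (1 / n + (xi - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)))
    center = alpha_hat + beta_hat * xi
    return center - half_width, center + half_width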

Asymptotic assumption

The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t*n−2 of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n − 2) is replaced with 1/n. When n is large such a change does not alter the results appreciably.

Numerical example

This example concerns the data set from the ordinary least squares article. This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.

There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:

$$ S_x = \sum x_i = 24.76, \qquad S_y = \sum y_i = 931.17, $$
$$ S_{xx} = \sum x_i^2 = 41.0532, \qquad S_{xy} = \sum x_i y_i = 1548.2453, \qquad S_{yy} = \sum y_i^2 = 58498.5439. $$

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.

$$ \hat{\beta} = \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2} = 61.272 $$
$$ \hat{\alpha} = \frac{1}{n} S_y - \hat{\beta}\,\frac{1}{n} S_x = -39.062 $$
$$ s_\varepsilon^2 = \frac{1}{n(n-2)}\left[n S_{yy} - S_y^2 - \hat{\beta}^2 (n S_{xx} - S_x^2)\right] = 0.5762 $$
$$ s_{\hat{\beta}}^2 = \frac{n\, s_\varepsilon^2}{n S_{xx} - S_x^2} = 3.1539 $$
$$ s_{\hat{\alpha}}^2 = s_{\hat{\beta}}^2\,\frac{1}{n} S_{xx} = 8.63185 $$

The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is t*13 = 2.1604, and thus the 95% confidence intervals for α and β are

$$ \alpha \in \left[\hat{\alpha} - t^*_{13}\, s_{\hat{\alpha}},\ \hat{\alpha} + t^*_{13}\, s_{\hat{\alpha}}\right] = [-45.4,\ -32.7] $$
$$ \beta \in \left[\hat{\beta} - t^*_{13}\, s_{\hat{\beta}},\ \hat{\beta} + t^*_{13}\, s_{\hat{\beta}}\right] = [57.4,\ 65.1] $$

The product-moment correlation coefficient might also be calculated:

$$ \hat{r} = \frac{n S_{xy} - S_x S_y}{\sqrt{(n S_{xx} - S_x^2)(n S_{yy} - S_y^2)}} = 0.9945 $$
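As a check, the figures above can be reproduced in plain Python directly from the five sums; this is only a sketch of the hand calculation, using the values given in this section:

n = 15
Sx, Sy = 24.76, 931.17
Sxx, Sxy, Syy = 41.0532, 1548.2453, 58498.5439

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)                                # ~ 61.272
alpha_hat = Sy / n - beta_hat * Sx / n                                              # ~ -39.062
s_eps2 = (n * Syy - Sy ** 2 - beta_hat ** 2 * (n * Sxx - Sx ** 2)) / (n * (n - 2))  # ~ 0.5762
s_beta2 = n * s_eps2 / (n * Sxx - Sx ** 2)                                          # ~ 3.1539
s_alpha2 = s_beta2 * Sxx / n                                                        # ~ 8.63185
r_hat = (n * Sxy - Sx * Sy) / ((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2)) ** 0.5    # ~ 0.9945

t_star = 2.1604   # 0.975 quantile of t with 13 degrees of freedom, as given above
ci_alpha = (alpha_hat - t_star * s_alpha2 ** 0.5, alpha_hat + t_star * s_alpha2 ** 0.5)  # ~ (-45.4, -32.7)
ci_beta = (beta_hat - t_star * s_beta2 ** 0.5, beta_hat + t_star * s_beta2 ** 0.5)       # ~ (57.4, 65.1)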

This example also demonstrates that sophisticated calculations will not overcome the use of badly prepared data. The heights were originally given in inches, and have been converted and rounded to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm, this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric without rounding: if this is done, the results become

$$ \hat{\beta} = 61.6746, \qquad \hat{\alpha} = -39.7468. $$

Thus a seemingly small variation in the data has a real effect.

Derivation of simple regression estimators

We look for $\hat{\alpha}$ and $\hat{\beta}$ that minimize the sum of squared errors (SSE):

$$ \min_{\hat{\alpha},\hat{\beta}} \operatorname{SSE}(\hat{\alpha}, \hat{\beta}) \equiv \min_{\hat{\alpha},\hat{\beta}} \sum_{i=1}^n (y_i - \hat{\alpha} - \hat{\beta} x_i)^2 $$

To find a minimum, take partial derivatives with respect to $\hat{\alpha}$ and $\hat{\beta}$:

$$ \frac{\partial}{\partial \hat{\alpha}}\operatorname{SSE}(\hat{\alpha}, \hat{\beta}) = -2\sum_{i=1}^n (y_i - \hat{\alpha} - \hat{\beta} x_i) = 0 $$
$$ \Rightarrow \sum_{i=1}^n (y_i - \hat{\alpha} - \hat{\beta} x_i) = 0 $$
$$ \Rightarrow \sum_{i=1}^n y_i = \sum_{i=1}^n \hat{\alpha} + \hat{\beta}\sum_{i=1}^n x_i $$
$$ \Rightarrow \sum_{i=1}^n y_i = n\hat{\alpha} + \hat{\beta}\sum_{i=1}^n x_i $$
$$ \Rightarrow \frac{1}{n}\sum_{i=1}^n y_i = \hat{\alpha} + \frac{1}{n}\hat{\beta}\sum_{i=1}^n x_i $$
$$ \Rightarrow \bar{y} = \hat{\alpha} + \hat{\beta}\bar{x} $$

Before taking the partial derivative with respect to $\hat{\beta}$, substitute the previous result for $\hat{\alpha}$:

$$ \min_{\hat{\alpha},\hat{\beta}} \sum_{i=1}^n \left[y_i - (\bar{y} - \hat{\beta}\bar{x}) - \hat{\beta} x_i\right]^2 = \min_{\hat{\alpha},\hat{\beta}} \sum_{i=1}^n \left[(y_i - \bar{y}) - \hat{\beta}(x_i - \bar{x})\right]^2 $$

Now, take the derivative with respect to $\hat{\beta}$:

$$ \frac{\partial}{\partial \hat{\beta}}\operatorname{SSE}(\hat{\alpha}, \hat{\beta}) = -2\sum_{i=1}^n \left[(y_i - \bar{y}) - \hat{\beta}(x_i - \bar{x})\right](x_i - \bar{x}) = 0 $$
$$ \Rightarrow \sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x}) - \hat{\beta}\sum_{i=1}^n (x_i - \bar{x})^2 = 0 $$
$$ \Rightarrow \hat{\beta} = \frac{\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x)} $$

And finally substitute $\hat{\beta}$ to determine $\hat{\alpha}$:

$$ \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x} $$
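As an optional sanity check of the derived formulas, the closed-form estimates can be compared against a degree-1 least-squares polynomial fit (hypothetical arrays; np.polyfit with degree 1 returns the slope and intercept of the least-squares line):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical sample
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical sample

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

slope, intercept = np.polyfit(x, y, 1)    # degree-1 least-squares fit
assert np.isclose(slope, beta_hat) and np.isclose(intercept, alpha_hat)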
