
Simultaneous Equation Regression and Causation

Isolation and Causation

y1 = γ11x1. Variable x1 is the isolated "cause" of changes in y1.

y1 = γ11x1 + ζ1. Isolation fails because ζ1 is an unobserved additional "cause."

Pseudo-isolation: x1 and ζ1 are independent causes.

What if the causes are not independent? For example, suppose that x1 also causes y2 to change, and y2 in turn influences y1, as seen below: y1 = γ11x1 + β12y2 + ζ1, where y2 = γ21x1 + ζ2.

[Figure: path diagram with arrows x1 → y1, x1 → y2, and y2 → y1]

Example: let x1 be the education level of a worker, y1 be the number of on-the-job mistakes and y2 be boredom. Higher education may induce boredom on the job and at the same time directly reduce the number of mistakes, but boredom increases mistakes.

a. What if we omit the intervening variable y2 and estimate y1= γ11* x1+ζ1* using OLS?

plim γ11* = cov(y1,x1)/var(x1) = cov(γ11x1 + β12y2 + ζ1, x1)/var(x1)

= cov(γ11x1 + β12(γ21x1 + ζ2) + ζ1, x1)/var(x1) = (γ11 + β12γ21)var(x1)/var(x1)

= γ11 + β12γ21, [1]

using pseudo-isolation: cov(x1,ζ1) = cov(x1,ζ2) = 0.

If γ11 = - β12 γ21, then plim γ11*=0. This is a suppressor relationship.

If γ11=0 but β12γ21>0 then plim γ11*>0. This is a confounding relationship.
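The suppressor case can be checked with a short simulation (a sketch with hypothetical parameter values; NumPy is the only dependency). The parameters are chosen so that γ11 = -β12γ21, making the OLS slope of y1 on x1 converge to zero even though x1 genuinely causes y1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical parameters chosen so g11 = -b12*g21 (suppressor case)
g11, b12, g21 = 0.6, -0.3, 2.0

x1 = rng.normal(size=n)
zeta2 = rng.normal(size=n)
zeta1 = rng.normal(size=n)
y2 = g21 * x1 + zeta2             # y2 = gamma21*x1 + zeta2
y1 = g11 * x1 + b12 * y2 + zeta1  # y1 = gamma11*x1 + beta12*y2 + zeta1

# OLS slope from regressing y1 on x1 alone (omitting y2)
g11_star = np.cov(y1, x1)[0, 1] / np.var(x1)
print(g11_star)  # near g11 + b12*g21 = 0: the true effect is suppressed
```

Swapping in values with γ11 = 0 and β12γ21 > 0 produces the confounding case instead: a clearly positive slope with no direct effect.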

b. What if we omit a common cause? In the above, suppose we do not measure x1; we might estimate y1 = β12* y2+ζ1*. This means that the error is ζ1*= γ11x1+ζ1.

plim β12* = cov(y1,y2)/var(y2) = cov(γ11x1 + β12y2 + ζ1, y2)/var(y2) = β12 + γ11cov(x1,y2)/var(y2)

= β12 + γ11γ21var(x1)/var(y2), since cov(x1,y2) = cov(x1, γ21x1 + ζ2) = γ21var(x1).

If β12 = 0 but γ11γ21 ≠ 0, then we infer that y2 causes y1 when this is false: a confound.
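A companion simulation illustrates the confound (again a sketch with hypothetical values; here β12 = 0, so y2 has no effect on y1, yet the regression of y1 on y2 converges to γ11γ21var(x1)/var(y2) rather than zero):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical values: b12 = 0, so y2 does NOT cause y1
g11, b12, g21 = 1.0, 0.0, 0.8

x1 = rng.normal(size=n)                        # the unmeasured common cause
y2 = g21 * x1 + rng.normal(size=n)
y1 = g11 * x1 + b12 * y2 + rng.normal(size=n)

# Regress y1 on y2 alone, omitting x1
b12_star = np.cov(y1, y2)[0, 1] / np.var(y2)

# Predicted plim: b12 + g11*g21*var(x1)/var(y2) = 0.8/1.64, about 0.49
print(b12_star)
```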

a. ⇒ Causation does not prove correlation.

b. ⇒ Correlation does not prove causation.

Thus, simple correlation analysis can neither prove nor disprove causation. However, recent work has shown that complex correlation patterns can establish causation; see notes “Do Wet Lawns Cause the Grass to Grow.”

Timing and Direction of Causation:

Rather than use "=", use "⇐" to denote the direction of causation.

Truth: yt ⇐ γxt + ζt and xt ⇐ αxt-1 + νt. Note that (xt+1 - νt+1)/α = xt, and hence

yt ⇐ γxt + ζt = (γ/α)xt+1 + ζt - (γ/α)νt+1. That is, yt is correlated with xt+1 because of the common cause xt. If we were to ignore the direction of causation, reasoning post hoc, ergo propter hoc, one could easily find that a regression xt+1 = β*yt + ζt+1* has a highly significant β* (assuming αγ ≠ 0) and falsely believe that yt causes xt+1. It looks as if yt is determined before xt+1, but in fact both are generated by the common cause xt.
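The point can be illustrated numerically (a sketch; the values of α and γ are hypothetical). y is built only from the contemporaneous x, yet regressing xt+1 on yt produces a large, stable slope because both share the common cause xt:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
alpha, gamma = 0.8, 1.5   # hypothetical AR and causal coefficients

# Truth: x_t <= alpha*x_{t-1} + nu_t and y_t <= gamma*x_t + zeta_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t - 1] + rng.normal()
y = gamma * x + rng.normal(size=T)

# "Post hoc" regression of x_{t+1} on y_t
x_next, y_now = x[1:], y[:-1]
beta_star = np.cov(x_next, y_now)[0, 1] / np.var(y_now)
print(beta_star)  # clearly nonzero, yet y does not cause x
```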

Simultaneous Equation Bias

y1 = ay2 + bx1 +ε1, Structural

y2 = αy1 + βx2 +ε2. Equations

[Figure: path diagram of the simultaneous system, with x1 → y1, x2 → y2, and arrows in both directions between y1 and y2]

Solve for the endogenous variables

y1 = (bx1 + aβx2 + ε1 + aε2)/(1 - aα), Reduced Form

y2 = (αbx1 + βx2 + αε1 + ε2)/(1 - aα). Equations

Look at just the first structural equation (a similar analysis holds for the second equation) and estimate it via OLS. Multiply it by y2 and then by x1, and average over the observations, to get two normal equations:

y2y1 = a·y2² + b·y2x1 + y2ε1,

x1y1 = a·x1y2 + b·x1² + x1ε1.

Note: plim x1ε1 = 0, but plim y2ε1 = plim[βx2ε1 + αbx1ε1 + αε1² + ε1ε2]/(1 - aα) = ασ1²/(1 - aα) ≠ 0.

OLS estimators of a and b are therefore

â = [x1²·(y2y1) - (y2x1)·(x1y1)] / [y2²·x1² - (y2x1)²], b̂ = [y2²·(x1y1) - (y2x1)·(y2y1)] / [y2²·x1² - (y2x1)²].

Focus on â; a similar analysis applies to b̂. Substituting from the normal equations gives

â = [x1²·(a·y2² + b·y2x1 + y2ε1) - (y2x1)·(a·x1y2 + b·x1² + x1ε1)] / [y2²·x1² - (y2x1)²].

Notice that the terms involving b cancel out. This can be expressed as

â = a + [x1²·(y2ε1) - (y2x1)·(x1ε1)] / [y2²·x1² - (y2x1)²].

Making use of the reduced forms, we have that plim[y2x1] = (βx2 + αbx1)x1/(1 - aα) and plim[y2²] = E[y2²] = [(βx2 + αbx1)² + α²σ1² + σ2²]/(1 - aα)². Recall, plim x1ε1 = 0 and plim[y2ε1] = ασ1²/(1 - aα). Using the fact that plim[g(X)] = g(plim[X]) for a continuous function g(·),

plim â = a + [ασ1²/(1 - aα)]·plim(x1²) / {plim(y2²)·plim(x1²) - [plim(y2x1)]²} ≠ a. Simultaneous Equation Bias

For illustration, if α > 0 and aα < 1, the bias term is positive and plim â > a, so the OLS estimate exceeds the true a. This is because y2 is correlated with ε1: if ε1 had a bump up, it would cause y1 to go up, and this (through the second equation) would cause y2 to go up (if α > 0). Hence OLS attributes both bumps, in y2 and in the unobserved ε1, to the variable y2 alone, and the estimate of the strength of y2, the parameter a, seems larger than it is in truth.

Numerical example: y1 = 0.1y2 + x1 + ε1, y2 = 2y1 - 3x2 + ε2, and suppose that σi² = 1. When ε1 goes up by 1 unit, y1 will rise by 1, and from the second equation, y2 will rise by 2 units. This in turn will cause y1 to increase by an additional 0.1×2 = 0.2 units (this feedback between the two equations would continue, but I will stop here for pedagogical purposes). Empirically, we will have observed y2 going up by 2 units and y1 going up by 1.2 units, which seems to imply that the coefficient "a" has a value like 1.2/2 = 0.6, rather than the true 0.1. Including all the feedbacks, a one-unit increase in ε1 raises y1 by 1/(1 - aα) = 1/0.8 = 1.25 and y2 by α/(1 - aα) = 2/0.8 = 2.5, so this channel alone pushes OLS toward a slope of 1.25/2.5 = 0.5 rather than the true 0.1.
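Simulating this example shows the upward bias directly (a sketch; x1, x2 and the errors are drawn as independent standard normals). The OLS estimate of "a" lands well above the true 0.1 (around 0.21 at these parameter values, since the exogenous variation in x1 and x2 dilutes the feedback channel):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
a, b, alpha, beta = 0.1, 1.0, 2.0, -3.0   # the example's parameters

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
e1 = rng.normal(size=n)
e2 = rng.normal(size=n)

# Generate the endogenous variables from the reduced forms
d = 1 - a * alpha                          # 1 - a*alpha = 0.8
y1 = (b * x1 + a * beta * x2 + e1 + a * e2) / d
y2 = (alpha * b * x1 + beta * x2 + alpha * e1 + e2) / d

# Naive OLS on the first structural equation: y1 on (y2, x1)
X = np.column_stack([y2, x1])
a_hat, b_hat = np.linalg.lstsq(X, y1, rcond=None)[0]
print(a_hat)   # biased well above the true a = 0.1
```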

Conclusion: If an endogenous variable is part of a simultaneous equation system with feedback loops between it and the other endogenous variables, then traditional regression will not reveal the truth on average. (Note: if α = 0, there is no feedback loop and plim â = a.) That is, the OLS estimators are asymptotically biased estimates of the true coefficients in the equations. We must therefore do something other than OLS to deal with the simultaneous equation bias, and it must take into account the fact that there are other equations determining some of the explanatory variables.

Two Stage Least Squares

The problem with OLS in a simultaneous equation model is that the errors are correlated with the regressors (in the above y2 was correlated with ε1). In creating the estimators of the first equation via OLS, we multiplied the first structural equation by both its regressors (y2 and x1) to get the normal equations:

y2y1 = a·y2² + b·y2x1 + y2ε1,

x1y1 = a·x1y2 + b·x1² + x1ε1.

Instead of doing this, suppose that we multiplied the first structural equation by the two exogenous variables, x1 and x2:

x2y1 = a·x2y2 + b·x2x1 + x2ε1,

x1y1 = a·x1y2 + b·x1² + x1ε1.

Notice that plim x2ε1=0 because x2 is exogenous. The variable x2 is an instrumental variable, since it is both i) causally linked to y2 and ii) independent of ε1. Hence if we took the plim of these normal equations, the terms involving errors would drop out and we could solve for estimators of a and b as

â = [x1²·(x2y1) - (x2x1)·(x1y1)] / [x1²·(x2y2) - (x2x1)·(x1y2)], b̂ = [(x2y2)·(x1y1) - (x1y2)·(x2y1)] / [x1²·(x2y2) - (x2x1)·(x1y2)].

Again, focus on â and substitute from the new normal equations to get

â = a + [x1²·(x2ε1) - (x2x1)·(x1ε1)] / [x1²·(x2y2) - (x2x1)·(x1y2)]. Taking the plim of both sides gives

plim â = a, since plim x2ε1 = 0 and plim x1ε1 = 0.

This procedure is called two-stage least-squares because it can be accomplished slightly differently. First stage: run an auxiliary regression of the reduced form equation y2 = η1x1 + η2x2 + θ2. Given the OLS estimates of the reduced form coefficients, compute the predicted values ŷ2 = η̂1x1 + η̂2x2 of the endogenous variable. Second stage: run an OLS regression of the structural equation using not the original y2 values (which are correlated with the errors) but the predicted values ŷ2 from the first stage: y1 = aŷ2 + bx1 + error. Since the predicted values are just a weighted average of the exogenous variables, both explanatory variables in this second stage regression are independent of the errors, and the second stage regression gives a consistent estimate of "a" and "b." In essence, we have used a weighted average of the exogenous variables as an instrumental variable, rather than the other exogenous variable alone (as seen above). In either case, the outcome is the same.

Finally, the computation of standard errors of the coefficients should be based upon the standard error of the regression, but this should be calculated from the deviations of y1 from ây2 + b̂x1, not from âŷ2 + b̂x1. Notice that both use the 2SLS coefficients, but the former uses the observed value of y2 while the latter uses the first-stage predicted value ŷ2. We need the observed y2, or else the deviation is not a measure of the true error. When you run the second stage regression directly, the printed standard errors are therefore incorrect.
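The two stages can be sketched as follows, reusing the hypothetical parameters of the earlier numerical example (true a = 0.1, b = 1). The first-stage fitted values replace y2 in the second stage, and the residuals for the standard error are formed with the observed y2:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
a, b, alpha, beta = 0.1, 1.0, 2.0, -3.0

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
e1 = rng.normal(size=n)
e2 = rng.normal(size=n)
d = 1 - a * alpha
y1 = (b * x1 + a * beta * x2 + e1 + a * e2) / d
y2 = (alpha * b * x1 + beta * x2 + alpha * e1 + e2) / d

# First stage: regress y2 on the exogenous variables x1, x2
Z = np.column_stack([x1, x2])
eta_hat = np.linalg.lstsq(Z, y2, rcond=None)[0]
y2_hat = Z @ eta_hat                       # predicted values of y2

# Second stage: regress y1 on (y2_hat, x1)
X2 = np.column_stack([y2_hat, x1])
a_2sls, b_2sls = np.linalg.lstsq(X2, y1, rcond=None)[0]
print(a_2sls, b_2sls)                      # close to the true (0.1, 1.0)

# Residuals for standard errors must use the OBSERVED y2, not y2_hat
resid = y1 - (a_2sls * y2 + b_2sls * x1)
sigma_hat = resid.std(ddof=2)              # estimates sigma_1 = 1
```

Compare with the naive OLS estimate from the previous simulation: the same data generate â ≈ 0.21 under OLS but recover the true 0.1 under 2SLS.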

-----------------------

[1] “plim” is explained in notes on “Convergence in Probability.”
