{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.4 - Intermediate - Issues in Regression\n", "\n", "COMET Team
*Emrul Hasan, Jonah Heyl, Shiming Wu, William Co,\n", "Jonathan Graves*\n", "2022-12-08\n", "\n", "## Outline\n", "\n", "### Prerequisites\n", "\n", "- Multiple regression\n", "- Simple regression\n", "- Data analysis and introduction\n", "\n", "### Outcomes\n", "\n", "- Understand the origin and meaning of multicollinearity in regression\n", "  models\n", "- Perform simple tests for multicollinearity using VIF\n", "- Be able to demonstrate common methods to fix or resolve collinear\n", "  data\n", "- Understand the origin and meaning of heteroskedasticity in\n", "  regression models\n", "- Perform a variety of tests for heteroskedasticity\n", "- Compute robust standard errors for regression models\n", "- Understand other techniques for resolving heteroskedasticity in\n", "  regression models\n", "\n", "### Notes\n", "\n", "Note that the data in this exercise is provided under the Statistics\n", "Canada Open License:\n", "\n", "> [1](#fn1s) Statistics Canada, Survey of Financial Security, 2019,\n", "> 2021. Reproduced and distributed on an “as is” basis with the\n", "> permission of Statistics Canada. Adapted from Statistics Canada,\n", "> Survey of Financial Security, 2019, 2021. This does not constitute an\n", "> endorsement by Statistics Canada of this product.\n", "\n", "> [2](#fn2s) The stargazer package is due to: Hlavac, Marek (2018).\n", "> stargazer: Well-Formatted Regression and Summary Statistics Tables. R\n", "> package version 5.2.2.\n", "> https://CRAN.R-project.org/package=stargazer " ], "id": "6ffbfb1f-aabd-4773-b0df-1ee46e2308dd" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# load the packages we need\n", "library(car)\n", "library(tidyverse)\n", "library(haven)\n", "library(stargazer)\n", "library(lmtest)\n", "library(sandwich)" ], "id": "4e3db07a-5895-43ec-b185-4e24bcaa9cfe" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# load the data and set it up\n", "source(\"intermediate_issues_in_regression_functions.r\")\n", "SFS_data <- read_dta(\"../datasets_intermediate/SFS_2019_Eng.dta\")\n", "SFS_data <- clean_up_data(SFS_data) # this function is explained in module 1" ], "id": "feffca14-0aef-4f87-bba8-7257915d5ad7" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "glimpse(SFS_data)" ], "id": "3a9ced44-7a1a-497e-86de-6df17e2cb2c9" }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we will examine several important issues in multiple\n", "regression models, and explore how to identify, evaluate, and correct\n", "them where appropriate. It is important to remember that there can be\n", "many other issues that arise in specific regression models; as you learn\n", "more about econometrics and create your own research questions,\n", "different issues will arise. Consider these as examples of some of the\n", "most common issues that arise in regression models.\n", "\n", "## Part 1: Multicollinearity\n", "\n", "Multicollinearity is a surprisingly common issue in applied regression\n", "analysis, arising when several explanatory variables are correlated\n", "with each other. For example, suppose we are interested in regressing\n", "marriage rates on years of education and annual income. In this case,\n", "the two explanatory variables, income and years of education, are\n", "highly correlated. Multicollinearity refers to the situation where a\n", "variable is “overdetermined” by the other variables in a model, which\n", "results in less reliable regression output.
For example, if we estimate a high coefficient on education, how\n", "certain are we that this coefficient is not the result of having a high\n", "annual income as well? Let’s look at this problem mathematically; in\n", "calculating an OLS estimation, you are estimating a relationship like:\n", "\n", "$$Y_i = \\beta_0 + \\beta_1 X_i + \\epsilon_i$$\n", "\n", "You find the estimates of the coefficients in this model using OLS;\n", "i.e. solving an equation like:\n", "\n", "$$ \\min_{b_0, b_1} \\sum_{i=1}^n(Y_i - b_0 -b_1 X_i)^2 $$\n", "\n", "Under the OLS regression assumptions, this has a unique solution;\n", "i.e. you can find unique values for $b_0$ and $b_1$.\n", "\n", "However, what if you split the intercept into two parameters, like this:\n", "\n", "$$\\beta_a = \\beta_0 + \\beta_1$$\n", "\n", "We can then rewrite the model as:\n", "\n", "$$Y_i = \\beta_0 + \\beta_1 + \\beta_2 X_i + \\epsilon_i$$\n", "\n", "This *seems* like it would be fine, but remember what you are doing:\n", "trying to find a *line* of best fit. The problem is that this equation\n", "does not define a unique line; the “intercept” is $\\beta_0 + \\beta_1$.\n", "There are two “parameters” ($\\beta_0, \\beta_1$) for a single\n", "“characteristic” (the intercept). This means that the resulting OLS\n", "problem:\n", "\n", "$$ \\min_{b_0, b_1, b_2} \\sum_{i=1}^n(Y_i - b_0 -b_1 -b_2 X_i)^2 $$\n", "\n", "does not have a unique solution. In algebraic terms, it means you can\n", "find many representations of a line with two intercept parameters. This\n", "is referred to in econometrics as a lack of **identification**;\n", "multicollinearity is one way that identification can fail in regression\n", "models.\n", "\n", "You can see this in the following example, which fits an OLS regression\n", "of `wealth` on `income_before_tax`, then compares the fit to a\n", "regression with two intercept parameters. Try changing the values to\n", "see what happens.\n", "\n", "> **Note**: Make sure to understand what the example below is doing.\n", "> Notice how the results are exactly the same, no matter what the value\n", "> of `k` is?" ], "id": "1146a025-cdb4-49aa-bb07-32b108fbffa9" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reg <- lm(wealth ~ income_before_tax, data = SFS_data)\n", "\n", "b_0 <- reg$coef[[1]]\n", "b_1 <- reg$coef[[2]]\n", "\n", "resid1 <- SFS_data$wealth - b_0 - b_1*SFS_data$income_before_tax\n", "\n", "k <- 90 # change me!\n", "\n", "# split the estimated intercept into two parts that always sum to it\n", "b_0 <- (reg$coef[[1]])/2 - k\n", "b_1 <- (reg$coef[[1]])/2 + k\n", "b_2 <- reg$coef[[2]]\n", "\n", "resid2 <- SFS_data$wealth - b_0 - b_1 - b_2*SFS_data$income_before_tax\n", "\n", "ggplot() + geom_density(aes(x = resid1), color = \"blue\") + xlab(\"Residuals from 1 Variable Model\") + ylab(\"Density\")\n", "ggplot() + geom_density(aes(x = resid2), color = \"red\") + xlab(\"Residuals from 2 Variable Model\") + ylab(\"Density\")" ], "id": "5849b6ef-ff2f-4312-9fe3-c1d8882f4f91" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how the residuals look *exactly* the same - despite these being\n", "from (purportedly) two different models. This is because they are not\n", "really two different models! They *identify* the same model!\n", "\n", "Okay, you’re probably thinking, that makes sense - but just don’t write\n", "down an equation like that. After all, it seems somewhat artificial that\n", "we added an extra intercept term.\n", "\n", "However, multicollinearity can occur with *any* set of variables in the\n", "model, not just the intercept.
For example, suppose you have a\n", "multiple regression:\n", "\n", "$$Y_i = \\beta_0 + \\beta_1 X_{1,i} + \\beta_2 X_{2,i} + \\beta_3 X_{3,i} + \\epsilon_i$$\n", "\n", "What would happen if there was a relationship between $X_1, X_2$ and\n", "$X_3$ like:\n", "\n", "$$X_{1,i} = 0.4 X_{2,i} + 12 X_{3,i}$$\n", "\n", "Then we could rewrite the equation as:\n", "\n", "$$Y_i = \\beta_0 + \\beta_1 (0.4 X_{2,i} + 12 X_{3,i}) + \\beta_2 X_{2,i} + \\beta_3 X_{3,i} + \\epsilon_i$$\n", "\n", "$$\\implies Y_i = \\beta_0 + (\\beta_2 + 0.4 \\beta_1) X_{2,i} + (\\beta_3 + 12 \\beta_1)X_{3,i} + \\epsilon_i$$\n", "\n", "The same problem is now occurring, but with $X_2$ and $X_3$: the slope\n", "coefficients depend on a free parameter ($\\beta_1$). You cannot\n", "uniquely find the equation of a line (or, here, a plane) from this kind\n", "of equation.\n", "\n", "Essentially, you are trying to solve for three parameters (or $n$\n", "parameters), but the data only pin down two (or $n-1$) independent\n", "combinations of them. The solution is to leave one of the collinear\n", "variables out, so that the remaining parameters can be solved for; this\n", "is exactly what R does.\n", "\n", "You can also intuitively see the condition here: multicollinearity\n", "occurs when you can express one variable as a *linear* combination of\n", "the other variables in the model.\n", "\n", "- This is sometimes referred to as **perfect multicollinearity**,\n", "  since the variable is *perfectly* expressed as a linear combination\n", "  of the other variables.\n", "- The linearity is important because this is a linear model; you can\n", "  have similar issues in other models, but it has a special name in\n", "  linear regression.\n", "\n", "### Perfect Multicollinearity in Models\n", "\n", "In general, most statistical packages (like R) will automatically\n", "detect, warn about, and remove perfectly multicollinear variables from a\n", "model; this is because the algorithm they use to solve problems like the\n", "OLS estimation equation detects the problem and avoids a “crash”. This\n", "is fine from a mathematical perspective - since mathematically the two\n", "results are the same (in a well-defined sense, as we saw above).\n", "\n", "However, from an economic perspective this is very bad - it indicates\n", "that there was a problem with the *model* that you defined in the first\n", "place. Usually, this means one of three things:\n", "\n", "1. You included a set of variables which were, in combination,\n", "   identical. For example, including “family size” and then “number of\n", "   children” and “number of adults” in a regression.\n", "2. You did not understand the data well enough, and variables had less\n", "   variation than you thought they did - conditional on the other\n", "   variables in the model. For example, maybe you thought people in the\n", "   dataset could have both graduate and undergraduate degrees - so\n", "   there was variation in “higher than high-school” - but that wasn’t\n", "   true.\n", "3. You wrote down a model which was poorly defined in terms of the\n", "   variables. For example, you included all levels of a dummy\n", "   variable, or included the same variable measured in two different\n", "   units (wages in dollars and wages in 1000s of dollars).\n", "\n", "In all of these cases, you need to go back to your original regression\n", "model and re-evaluate what you are trying to do in order to simplify the\n", "model or correct the error." ], "id": "9b2d5e2d-d43d-4373-8a2d-3fdfa3645879" }
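, { "cell_type": "markdown", "metadata": {}, "source": [ "To see this automatic handling in action, here is a minimal sketch (the\n", "added variable and object names are ours, purely for illustration): we\n", "create a copy of before-tax income measured in thousands of dollars,\n", "which is a perfect linear function of the original, and R reports an\n", "`NA` coefficient for the redundant copy rather than crashing." ], "id": "added-perfect-collinearity-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a sketch of perfect multicollinearity: the same variable in two units\n", "SFS_data_demo <- SFS_data %>%\n", "    mutate(income_before_tax_thousands = income_before_tax / 1000)\n", "\n", "collinear_reg <- lm(wealth ~ income_before_tax + income_before_tax_thousands,\n", "                    data = SFS_data_demo)\n", "\n", "# R detects the redundancy and drops the copy (its coefficient shows as NA)\n", "summary(collinear_reg)" ], "id": "added-perfect-collinearity-demo" }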
, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider the following regression model, in which we want to study\n", "whether or not there is an income penalty for families led by someone\n", "who is younger in the SFS data:" ], "id": "added-ya-regression-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SFS_data <- SFS_data %>%\n", "    mutate(ya = case_when(\n", "        education == \"Less than high school\" ~ \"Yes\",\n", "        education == \"High school\" ~ \"Yes\",\n", "        education == \"Non-university post-secondary\" ~ \"No\",\n", "        TRUE ~ \"No\" # this is for all other cases\n", "    )) %>%\n", "    mutate(ya = as_factor(ya))\n", "\n", "regression2 <- lm(income_before_tax ~ ya + education, data = SFS_data)\n", "\n", "summary(regression2)" ], "id": "97728159-6f0b-498c-a3c8-0402841df537" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Can you see why the multicollinearity is occurring here? Try to write\n", "down an equation which points out what the problem is in this\n", "regression - why is it multicollinear? How could you fix this problem\n", "by changing the model?\n", "\n", "> *Think Deeper*: You will notice, above, that it excluded the\n", "> “University” education. Did it have to exclude that one? Could it have\n", "> excluded another one instead? What do you think?\n", "\n", "### Imperfect Multicollinearity\n", "\n", "A related issue to perfect multicollinearity is “near” (or\n", "**imperfect**) multicollinearity. If you recall from the above, perfect\n", "multicollinearity occurs when you have a relationship like:\n", "\n", "$$X_{1,i} = 0.4 X_{2,i} + 12 X_{3,i}$$\n", "\n", "Notice that this relationship holds *for all* values of $i$. However,\n", "what if it held for *nearly* all $i$ instead? In that case, we would\n", "still have a solution to the equation… but there would be a problem.\n", "Let’s look at this in the simple regression case.\n", "\n", "$$Y_i = \\beta_0 + \\beta_1 X_i + \\epsilon_i$$\n", "\n", "Now, let’s suppose that $X_i$ is “almost” collinear with $\\beta_0$. To\n", "be precise, suppose that $X_i = 15$ for a fraction $k$ of the data ($k$\n", "will be large) and $X_i = 20$ for the remaining fraction $(1-k)$. This\n", "is *almost* constant, and so it is *almost* collinear with $\\beta_0$\n", "(the constant). Let’s also generate the values of $Y_i$ so that\n", "$Y_i = X_i + \\epsilon_i$ (so $\\beta_1 = 1$), and we will set\n", "$\\sigma_\\epsilon = 1$.\n", "\n", "This implies that (applying some of the formulas from class):\n", "\n", "$$\\beta_1 = \\frac{Cov(X_i,Y_i)}{Var(X_i)} = 1$$\n", "\n", "$$s_b = \\frac{1}{\\sqrt{n-2}}\\sqrt{\\frac{1}{r^2}-1}$$\n", "\n", "$$r = \\frac{\\sigma_X}{\\sigma_Y}$$\n", "\n", "Since $Y_i = X_i + \\epsilon_i$, we have\n", "$\\sigma_Y^2 = Var(X_i) + \\sigma_\\epsilon^2$, and so\n", "$\\frac{1}{r^2} - 1 = \\frac{\\sigma_\\epsilon^2}{Var(X_i)}$.\n", "\n", "As you can see, when $Var(X_i)$ goes down, $\\sigma_X$ falls, and the\n", "value of $r$ falls; intuitively, when $k$ rises, the variance will go to\n", "zero, which makes $r$ go to zero as well (since there’s no variation).\n", "You can then see that $s_b$ diverges to infinity.\n", "\n", "We can make this more precise. In this model, how does $Var(X_i)$ depend\n", "on $k$? Well, first notice that\n", "$\\bar{X} = 15\\cdot k + 20 \\cdot (1-k)$. Then,\n", "\n", "$$Var(X_i) = E[(X_i - \\bar{X})^2] = k (15 - \\bar{X})^2 + (1-k)(20 - \\bar{X})^2$$\n", "\n", "$$\\implies Var(X_i) = 25[k(1-k)^2 + (1-k)k^2] = 25k(1-k)$$\n", "\n", "Okay, that looks awful - so let’s plot a graph of $s_b$ versus $k$ (when\n", "$n = 1000$):" ], "id": "ac997c92-f8f3-49c7-a91b-b6244fd73501" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "options(repr.plot.width=6,repr.plot.height=4)\n", "\n", "r <- 0.01 # step size for the grid of k values\n", "\n", "# s_b = (1/sqrt(n-2))*sqrt(1/r^2 - 1), and 1/r^2 - 1 = 1/Var(X_i) here\n", "eq <- function(k){(1/sqrt(1000-2))*sqrt(1/(25*(k*(1-k)^2 + (1-k)*k^2)))}\n", "\n", "k_values <- seq(0.5, 0.99, by = r) # stop just short of k = 1, where s_b is infinite\n", "\n", "plot(k_values, eq(k_values), type='l', xlab=\"Values of k\", ylab=\"Standard Error\")\n", "\n", "# Notice that the standard error diverges to infinity as k approaches 1\n", "# Try making the step size r smaller to show this fact!" ], "id": "8a47159a-7e0a-4504-b7c4-46dccd7aca52" }
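, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also see this divergence by simulation. Here is a minimal sketch\n", "of the setup above (the seed, object names, and sample values are\n", "arbitrary choices of ours): when $k$ is close to 1 the regressor is\n", "almost constant, and the standard error on its coefficient is much\n", "larger than when the regressor varies evenly." ], "id": "added-imperfect-collinearity-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a simulated sketch of the 'almost constant' regressor described above\n", "set.seed(123)\n", "n_sim <- 1000\n", "\n", "x_low_var <- ifelse(runif(n_sim) < 0.99, 15, 20) # k = 0.99: X is 15 for ~99% of the sample\n", "y_low_var <- x_low_var + rnorm(n_sim)            # Y = X + epsilon, sigma_epsilon = 1\n", "\n", "x_high_var <- ifelse(runif(n_sim) < 0.50, 15, 20) # k = 0.50: much more variation in X\n", "y_high_var <- x_high_var + rnorm(n_sim)\n", "\n", "# compare the standard errors on the slope coefficients\n", "summary(lm(y_low_var ~ x_low_var))\n", "summary(lm(y_high_var ~ x_high_var))" ], "id": "added-imperfect-collinearity-sim" }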
, { "cell_type": "markdown", "metadata": {}, "source": [ "Why does this happen? The reason actually has to do with *information*.\n", "\n", "When you estimate a regression, you are using the variation in the data\n", "to estimate each of the parameters. As the variation falls, the\n", "estimation gets less and less precise, because you are using less and\n", "less data to make an evaluation. Graphically, you can think of\n", "regression as drawing a best-fit line through data points: if the\n", "variance is $0$, there is effectively just one data point, and - as you\n", "may remember from high school - you need two points to draw a line.\n", "With $0$ variance, the OLS problem becomes ill-defined.\n", "\n", "The magnitude of this problem can be quantified using the **VIF** or\n", "**variance inflation factor** for each of the variables in question. We\n", "can calculate this directly in R by using the `vif` function. Let’s\n", "look at the collinearity in our model:" ], "id": "412bb449-378c-4ae8-96ac-989943d4a4af" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regression2 <- lm(wealth ~ income_before_tax + income_after_tax, data = SFS_data)\n", "\n", "summary(regression2)" ], "id": "2e8cf659-8619-4af5-a9f8-b6e310773802" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat(\"Variance inflation factor of income after tax on wealth: \",vif(regression2,SFS_data$income_after_tax,SFS_data$wealth),'\\n')\n", "cat(\"Variance inflation factor of income before tax on wealth: \",vif(regression2,SFS_data$income_before_tax,SFS_data$wealth),'\\n')\n", "cat(\"Variance inflation factor of income before tax on income after tax: \",vif(regression2,SFS_data$income_before_tax,SFS_data$income_after_tax),'\\n')" ], "id": "646770dd-ddff-4771-993f-90f088e2948d" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the extremely large VIF. This indicates that you have a problem\n", "with collinearity in your data.\n", "\n", "> *Think Deeper*: What happens to the VIF as `k` changes? Why? Can you\n", "> explain?\n", "\n", "There are no “hard” rules for what makes a VIF too large - you should\n", "think about your model holistically, and use it as a way to investigate\n", "whether you have any problems with your model evaluation and analysis." ], "id": "added-vif-discussion" }
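, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, the conventional VIF for a regressor $X_j$ is\n", "$\\frac{1}{1 - R_j^2}$, where $R_j^2$ is the $R^2$ from regressing $X_j$\n", "on the other explanatory variables. Here is a minimal sketch of that\n", "calculation for `income_before_tax` in the model above (the object\n", "names are ours):" ], "id": "added-vif-by-hand-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# conventional VIF by hand: regress one explanatory variable on the other(s),\n", "# then compute 1/(1 - R^2) from that auxiliary regression\n", "aux_reg <- lm(income_before_tax ~ income_after_tax, data = SFS_data)\n", "\n", "aux_r2 <- summary(aux_reg)$r.squared\n", "vif_before_tax <- 1/(1 - aux_r2)\n", "\n", "vif_before_tax # very large values signal a collinearity problem" ], "id": "added-vif-by-hand" }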
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Heteroskedasticity\n", "\n", "**Heteroskedasticity** (Het-er-o-sked-as-ti-city) is another common\n", "problem in many economic models. It refers to the situation in which the\n", "distribution of the residuals changes as the explanatory variables\n", "change. Usually, we can visualize this problem by drawing a residual\n", "plot, where a fan or cone shape indicates the presence of\n", "heteroskedasticity. For example, consider this regression:" ], "id": "cb4aa401-c211-431c-9bb7-62cb24ba69fa" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regression3 <- lm(income_before_tax ~ income_after_tax, data = SFS_data)\n", "\n", "ggplot(data = SFS_data, aes(x = as.numeric(income_after_tax), y = as.numeric(regression3$residuals))) + geom_point() + labs(x = \"After-tax income\", y = \"Residuals\")" ], "id": "04e2b921-a63e-4fd1-b61e-8e9141474cfd" }, { "cell_type": "markdown", "metadata": {}, "source": [ "This obviously does not look like a distribution which is unchanging as\n", "after-tax income changes. This is a good “eyeball test” for\n", "heteroskedasticity. Why does heteroskedasticity arise? For many reasons:\n", "\n", "1. It can be a property of the data; it just happens that some values\n", "   show more variation, due to the process which creates the data. One\n", "   of the most common ways this can arise is where there are several\n", "   different economic processes creating the data.\n", "\n", "2. It can be because of an unobserved variable. This is similar to the\n", "   above: there is a process we could have quantified in a variable,\n", "   but we have left it out. This could create bias in our model, but it\n", "   will also show up in the standard errors in this way.\n", "\n", "3. It can be because of your model specification. Models, by their very\n", "   nature, can be heteroskedastic (or not); we will explore one\n", "   important example later in this worksheet.\n", "\n", "4. There are many other reasons, which we won’t get into here.\n", "\n", "Whatever the reason it exists, you need to correct for it - if you\n", "don’t, while your coefficients will be OK, your standard errors will be\n", "incorrect. You can do this in a few ways. The first way is to try to\n", "transform your variables so that the “transformed” model (a) makes\n", "economic sense, and (b) no longer suffers from heteroskedasticity. For\n", "example, perhaps a *log-log* style model might work here:" ], "id": "aca57a9d-52ba-421e-8808-a925162c8097" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SFS_data <- SFS_data %>%\n", "    filter(income_before_tax > 0) %>% # keep only positive values so the log is defined\n", "    mutate(lnincome_before_tax = log(income_before_tax))\n", "\n", "SFS_data <- SFS_data %>%\n", "    filter(income_after_tax > 0) %>%\n", "    mutate(lnincome_after_tax = log(income_after_tax))\n", "\n", "regression4 <- lm(lnincome_before_tax ~ lnincome_after_tax, data = SFS_data)\n", "\n", "ggplot(data = SFS_data, aes(x = lnincome_before_tax, y = regression4$residuals)) + geom_point() + labs(x = \"Log of before-tax income\", y = \"Residuals\")" ], "id": "49400d3b-9574-4a60-a089-5aebef3c3031" }, { "cell_type": "markdown", "metadata": {}, "source": [ "> *Think Deeper*: Do the errors of this model seem homoskedastic?\n", "\n", "As you can see, that didn’t work out. This is pretty typical: when you\n", "transform a model by changing the variables, what you are really doing\n", "is adjusting how you think the data process should be described so that\n", "it’s no longer heteroskedastic.
If you aren’t correct with this, you\n", "won’t fix the problem.\n", "\n", "For example, in a *log-log* model, we are saying “there’s a\n", "multiplicative relationship”… but that probably doesn’t make sense here.\n", "This is one of the reasons why data transformations are not usually a\n", "good way to fix this problem unless you have a very clear idea of what\n", "the transformation *should* be.\n", "\n", "The most robust (no pun intended) way is to simply use standard errors\n", "which are robust to heteroskedasticity. There are actually a number of\n", "different versions of these (which you don’t need to know about), but\n", "they are all called **HC** or **heteroskedasticity-corrected** standard\n", "errors. In economics, we typically adopt White’s versions of these\n", "(called **HC1** in the literature); these are often referred to in\n", "economics papers as “robust” standard errors (for short).\n", "\n", "This is relatively easy to do in R. Basically, you run your model, as\n", "normal, and then re-test the coefficients to get the correct errors\n", "using the `coeftest` command, but specifying which kind of errors you\n", "want to use. Here is an example:" ], "id": "a3d599bc-0667-448f-abea-9e2f02bf3875" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regression5 <- lm(income_before_tax ~ income_after_tax, data = SFS_data)\n", "\n", "summary(regression5)\n", "\n", "coeftest(regression5, vcov = vcovHC(regression5, type = \"HC1\"))" ], "id": "918e3f14-c5dd-4354-a510-51357e9edf78" }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the standard errors (and significance tests) give\n", "different results; in particular, the HC1 errors are almost 10 times\n", "larger than the uncorrected errors. In this particular model, it didn’t\n", "make much of a difference to the conclusions (even though it changed the\n", "$t$ statistics a lot), but it can sometimes change your results.\n", "\n", "### Testing for Heteroskedasticity\n", "\n", "You can also perform some formal tests for heteroskedasticity. We\n", "learned about two of them in class:\n", "\n", "1. White’s Test, which relies on performing a regression using the\n", "   residuals\n", "2. Breusch-Pagan Test, which also relies on performing a simpler\n", "   regression using the residuals\n", "\n", "Both of them are, conceptually, very similar. Let’s try (2) for the\n", "above regression:" ], "id": "cae1f7b5-6514-41a1-82df-e4cd5c35f52b" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "regression2 <- lm(income_before_tax ~ income_after_tax, data = SFS_data)\n", "\n", "SFS_data$resid_sq <- (regression2$residuals)^2 # get the residuals, then square them\n", "\n", "regression3 <- lm(resid_sq ~ income_after_tax, data = SFS_data) # regress the squared residuals on X\n", "\n", "summary(regression3)" ], "id": "bacd6019-bef5-47f9-9f4b-aecc0aa82964" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Inspecting the results, we can see from the $F$-statistic that we can\n", "strongly reject the assumption of homoskedasticity. This is denoted by\n", "the three asterisks. The data looks heteroskedastic, because the\n", "residuals can be predicted using the explanatory variables." ], "id": "added-bp-results" }
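, { "cell_type": "markdown", "metadata": {}, "source": [ "As an aside, the same test comes pre-packaged in the `lmtest` package\n", "(which we loaded above) as `bptest()`; here is a quick sketch of its\n", "use on the same model:" ], "id": "added-bptest-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# the Breusch-Pagan test from the lmtest package\n", "# (by default this runs the studentized version of the test)\n", "bptest(regression2)" ], "id": "added-bptest-demo" }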
, { "cell_type": "markdown", "metadata": {}, "source": [ "There is one very important note:\n", "\n", "- If you **fail** one of these tests (i.e. you reject the null), it\n", "  implies that your data is heteroskedastic\n", "- If you **pass** one of these tests, it *does not* imply that your\n", "  data is homoskedastic (i.e. not heteroskedastic)\n", "\n", "This is because these are statistical tests, and the null hypothesis is\n", "“not heteroskedastic”. Failing to reject the null does not mean that the\n", "null hypothesis is correct - it just means that you can’t rule it out.\n", "This is one of the reasons many economists recommend that you *always*\n", "use robust standard errors unless you have a really compelling reason to\n", "believe otherwise.\n", "\n", "### Linear Probability Models\n", "\n", "How can a model naturally have heteroskedastic standard errors? It turns\n", "out that many common, and important, models have this issue. In\n", "particular, the **linear probability** model has this problem. If you\n", "recall, a linear probability model is a linear regression in which the\n", "dependent variable is a dummy. For example:\n", "\n", "$$D_i = \\beta_0 + \\beta_1 X_{1,i} + \\beta_2 X_{2,i} + \\epsilon_i$$\n", "\n", "These models are quite useful because the coefficients can be\n", "interpreted as the change in the probability of the dummy condition\n", "occurring. For example, we previously used `gender` (male or female) in\n", "these models to investigate the wealth gap. However, this can easily\n", "cause a problem when estimated using OLS: the value of $D_i$ must be 0\n", "or 1, and the fitted values (which are probabilities) must be between 0\n", "and 1.\n", "\n", "However, *nothing* in the OLS model forces this to be true. Once you\n", "estimate a value for $\\beta_1$, an $X_{1,i}$ that is high or low enough\n", "will push the fitted value above 1 or below 0. This implies that\n", "*mechanically* you have heteroskedasticity, because high or low values\n", "of the explanatory variables will ALWAYS fit worse than intermediate\n", "values. For example, let’s look at the fitted values from this\n", "regression:" ], "id": "d27d6604-6b34-4a71-ac9d-2207c6c4a666" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SFS_data <- SFS_data %>%\n", "    mutate(M_F = case_when(\n", "        gender == \"Male\" ~ 0,\n", "        gender == \"Female\" ~ 1\n", "    ))\n", "\n", "SFS_data <- SFS_data[complete.cases(SFS_data$gender,SFS_data$income_before_tax), ]\n", "SFS_data$gender <- as.numeric(SFS_data$gender)\n", "SFS_data$income_before_tax <- as.numeric(SFS_data$income_before_tax)\n", "\n", "# regress the 0/1 dummy on income to get a linear probability model\n", "regression6 <- lm(M_F ~ income_before_tax, data = SFS_data)\n", "\n", "SFS_data$fitted <- predict(regression6, SFS_data)\n", "\n", "summary(regression6)\n", "\n", "ggplot(data = SFS_data, aes(x = as.numeric(income_before_tax), y = fitted)) + geom_point() + labs(x = \"Before-tax income\", y = \"Predicted Probability\")" ], "id": "41718286-0f7e-4e78-9c67-ac0808a6d8a2" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how the fitted value drops as income gets larger. If someone\n", "has an income of over 1 million dollars, they would be predicted to have\n", "a negative probability of being female - which is impossible.\n", "\n", "This is why you *always* must use robust standard errors in these\n", "models - even if a test says otherwise." ], "id": "added-lpm-discussion" }
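, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, we can re-test the coefficients of the linear probability\n", "model above with HC1 (robust) standard errors, exactly as before - a\n", "quick sketch:" ], "id": "added-lpm-robust-note" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# robust (HC1) standard errors for the linear probability model\n", "coeftest(regression6, vcov = vcovHC(regression6, type = \"HC1\"))" ], "id": "added-lpm-robust-demo" }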
, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s think about what is happening here. Remember the example of\n", "imperfect collinearity, where there was a $k$ chance of $X_i$ being 15\n", "and a $(1-k)$ chance of it being 20: $X_i$ was almost collinear with\n", "$\\beta_0$, and this caused large standard errors. Something similar is\n", "going on here. The probability of someone being female, given that they\n", "earn over a million dollars a year, is very small - female-led\n", "households earning over a million dollars a year are a very small share\n", "of all households earning that much.\n", "\n", "## Part 3: Exercises\n", "\n", "In these exercises, you will get some hands-on experience with testing\n", "and correcting for heteroskedasticity and multicollinearity. You will\n", "also start to think about the mechanics and uses of non-linear\n", "regressions.\n", "\n", "### Activity 1\n", "\n", "Multicollinearity may seem to be an abstract concept, so let’s explore\n", "this issue with a practical example.\n", "\n", "Suppose that we are looking to explore the relationship between a\n", "family’s income and the gender of the major earner. For instance, we\n", "want to know whether the major earner in higher-income Canadian\n", "families is more likely to be male. Recall that we have two measures of\n", "income: `income_before_tax` and `income_after_tax`. Both measures of\n", "income are informative: `income_before_tax` refers to gross annual\n", "income (before taxes) that employers pay to employees;\n", "`income_after_tax` refers to net income after taxes have been deducted.\n", "\n", "Since they are both good measures of income, we decide to put them both\n", "in our regression:\n", "\n", "$$M_F = \\beta_0 + \\beta_1 I_{bi} + \\beta_2 I_{ai} + \\epsilon_i$$\n", "\n", "$M_F$ denotes the dummy variable for whether the major earner is male or\n", "female, $I_{ai}$ denotes income after taxes, and $I_{bi}$ denotes income\n", "before taxes.\n", "\n", "### Short Answer 1\n", "\n", "Prompt: What concern should we have about this regression\n", "equation? Explain your intuition." ], "id": "a4dcf2f9-5944-4c0a-bcf8-df8d14bef902" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_1 <- #fill in your short answer" ], "id": "bf27c024-5678-4694-91bb-4a09ce602362" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we continue, let’s reduce the sample size of our data set to 200\n", "observations. We will also convert `gender` into a numeric variable:" ], "id": "1f128550-29e3-48d5-9107-33147322fa48" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Run this!\n", "SFS_data200 <- head(SFS_data, 200) %>%\n", "    mutate(M_F = as.numeric(gender)) # keep the first 200 observations, with gender coded as a number" ], "id": "6b9c68b3-a16c-4b49-9b84-b33026f7fb6e" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Activity 2\n", "\n", "Run the regression described above.\n", "\n", "Tested Objects: `reg1`." ], "id": "8145fb8a-50e7-446d-bbbd-5af5a1001e69" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Quiz 1\n", "\n", "reg1 <- lm(???, data = SFS_data200) #fill me in\n", "\n", "summary(reg1)\n", "\n", "test_1()" ], "id": "4cc9cac4-4701-4ded-9dd8-e31e71c1e5a1" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 2\n", "\n", "Prompt: What do you notice about the characteristics of the\n", "estimated regression?
Does anything point to your concern being valid?\n", "\n", "Answer here in red" ], "id": "6abb3cc9-f748-48cd-95e8-45d82aace1a2" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_2 <- #fill in your short answer" ], "id": "d3716ee0-3605-4d91-b571-dc58982c1e4b" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let’s suppose we drop 50 more observations:" ], "id": "43d06344-8054-4c17-bbc8-256b9d68dd43" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Run this!\n", "SFS_data150 <- head(SFS_data200, 150)" ], "id": "2a19eb3a-6d47-4c13-b9d7-451628fd3b96" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the regression model again and compare it with the previous\n", "regression.\n", "\n", "Tested Objects: `reg2`." ], "id": "2bffc75a-cc0d-4a2a-b32c-33ee9babeb17" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Quiz 2\n", "\n", "reg2 <- lm(???) # fill me in\n", "\n", "summary(reg2)\n", "\n", "test_2() " ], "id": "86e2ded7-cdba-47bb-a71c-d586c3b3337e" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 3\n", "\n", "Prompt: What happened to the regression estimates when we\n", "dropped 50 observations? Does this point to your concern being valid?\n", "\n", "Answer here in red" ], "id": "1b27d4db-38a9-47a7-8276-8066a45bf783" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_3 <- #fill in your short answer" ], "id": "ba29a6a7-6d74-4f35-b778-9b60d31f8e5d" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, increase the sample size back to its full size and run the\n", "regression once again.\n", "\n", "Tested Objects: `reg3`." ], "id": "e405d245-bb74-4105-859c-cd93005a99e3" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Quiz 3\n", "SFS_data <- SFS_data[complete.cases(SFS_data$income_after_tax), ] #do not modify this code\n", "SFS_data$income_after_tax <- as.numeric(SFS_data$income_after_tax) #do not modify this code\n", "\n", "reg3 <- lm(???) #fill me in\n", "\n", "summary(reg3)\n", "\n", "test_3() " ], "id": "b122635e-3349-4d12-8f84-a7abed2b2847" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 5\n", "\n", "Prompt: Did this change eliminate the concern? How do you know?\n", "\n", "Answer here in red " ], "id": "88324565-a870-478d-a3f9-375747072bea" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_5 <- #fill in your short answer" ], "id": "f0bb913e-8068-4e7d-92bf-f88762c5acef" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Heteroskedasticity is another issue that researchers frequently deal\n", "with when they estimate regression models. Consider the following\n", "regression model:\n", "\n", "$$I_i = \\alpha_0 + \\alpha_1 E_i + \\alpha_2 G_i + \\epsilon_i$$\n", "\n", "$I_i$ denotes before-tax income, $E_i$ is the level of education, and\n", "$G_i$ is a dummy variable for being female.\n", "\n", "### Short Answer 6\n", "\n", "Prompt: Should we be concerned about heteroskedasticity in this\n", "model?
If so, what is the potential source of heteroskedasticity, and\n", "what do we suspect to be the relationship between the regressor and the\n", "error term?\n", "\n", "Answer here in red" ], "id": "a300c410-3645-47c6-9188-ee535d042318" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_6 <- #fill in your short answer" ], "id": "7dabd005-875e-43c9-8252-767d7a670634" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 7\n", "\n", "Prompt: If we suppose that heteroskedasticity is a problem in\n", "this regression, what consequences will this have for our regression\n", "estimates?\n", "\n", "Answer here in red" ], "id": "855a1173-863f-49d5-9f1f-788fc1367eef" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_7 <- #fill in your short answer" ], "id": "2c3cca5f-842a-4e6f-9cf5-9f4923bbd76e" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the regression below, and graph the residuals against the level of\n", "schooling.\n", "\n", "Tested Objects: `reg5`. The graph will be graded manually." ], "id": "71c6ccc4-4a03-443f-b450-acc4ef18db98" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Run the regression\n", "reg5 <- lm(income_before_tax ~ education, data = SFS_data)" ], "id": "3588ba07-d8a1-4b86-83cb-2dcf02437196" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Quiz 5\n", "\n", "resiplot <- ggplot(reg5, aes(x = ???, y = .resid)) + xlab(\"Education Level\") + ylab(\"Income (Residuals)\")\n", "resiplot + geom_point() + geom_hline(yintercept = 0) + scale_x_discrete(guide = guide_axis(n.dodge=3))\n", "x <- ???" ], "id": "34465549-835c-4bba-898f-6adb70a79b53" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 8\n", "\n", "Prompt: Describe the relationship between education level and\n", "the residuals in the graph above. What does the graph tell us about the\n", "presence and nature of heteroskedasticity in the regression model?\n", "\n", "Answer here in red: " ], "id": "fcdd58a0-a66f-42b2-9ffb-fd8eba0f8ee0" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_8 <- #fill in your short answer" ], "id": "172903e7-532c-4d50-874d-8789381cb4f1" }, { "cell_type": "markdown", "metadata": {}, "source": [ "To test for heteroskedasticity formally, let’s perform the White Test.\n", "First, store the residuals from the previous regression in `SFS_data`.\n", "\n", "Tested Objects: `SFS_data` (checks to see that residuals were added\n", "properly)." ], "id": "81ecebf7-1fe1-457c-aeda-587703c95dd2" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Quiz 6\n", "\n", "SFS_data <- mutate(SFS_data, resid = ???)\n", "\n", "head(SFS_data$resid, 10) # displays the residuals in the dataframe\n", "\n", "test_6() " ], "id": "3e9a58bf-9cd9-4bfc-a149-520c0f9a3489" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, generate a variable for the squared residuals, then run the\n", "required auxiliary regression.\n", "\n", "Tested Objects: `WT` (the auxiliary regression)."
], "id": "ddca2a45-44ec-4929-844a-25f1090d3a40" }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "#Quiz 7\n", "model=lm(income_before_tax~gender^2 + gender + education^2 +education+ education*gender, data = SFS_data) # fill me in\n", "resid = reg5$residuals\n", "rsq=(resid)^2" ], "id": "7b5bbe35-f189-41de-a7ff-1e778b350c4a" }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SFS_data$rsq=rsq\n", "\n", "WT <- lm(rsq~ ???, data =SFS_data) # fill me in\n", "\n", "summary(WT)\n", "\n", "\n", "test_7() " ], "id": "ea76e27e-1874-42ab-bbb6-4c0e7986120f" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Short Answer 9\n", "\n", "Prompt: What does the white test suggest?\n", "\n", "### Activity 3\n", "\n", "Finish filling in this table:\n", "\n", "| Formal Issue Name | Problem | Meaning | Test | Solution |\n", "|------|--------------------|---------------|---------------|------------------|\n", "| ??? | Incorrect Standard errors, which can lead to incorrect confidence intervals etc | The distribution of residuals is not constant | White’s Test and Breusch-Pagan: `bptest()` | Add additional factors to regression or use robust standard errors |\n", "| Perfect Collinearity | ??? | One variable in the regression is a linear function of another variable in the regression | Collinearity test on the model, `ols_vif_tol(model)` | ??? |\n", "| Imperfect Collinearity | The model will have very large standard errors. R may need to omit a variable | One variable can almost be fully predicted with a linear function of another variable in the model | ??? | Omit one of the collinear variables, try using more data, or consider transformations (e.g., logarithms) |\n", "\n", "[1](#fn1s)Provided under the Statistics Canada\n", "Open License. Adapted from Statistics Canada, Statistics Canada Open\n", "License (Public) Adapted from Statistics Canada, 2021 Census Public Use\n", "Microdata File (PUMF). This does not constitute an endorsement by\n", "Statistics Canada of this product." ], "id": "a3bf407b-6cad-4421-a0a6-82c91d367846" } ], "nbformat": 4, "nbformat_minor": 5, "metadata": { "kernelspec": { "name": "ir", "display_name": "R", "language": "r" } } }