#Clear the memory from any pre-existing objects
rm(list=ls())
# loading in our packages
library(tidyverse) #This includes ggplot2!
library(haven)
library(IRdisplay)
#Open the dataset
fake_data <- read_dta("../econ490-r/fake_data.dta")
13 - Good Regression Practice
Prerequisites
- Importing data into R.
- Creating new variables in R.
- Running OLS regressions.
Learning Outcomes
- Identify and correct for outliers by trimming or winsorizing the dependent variable.
- Identify and correct for the problem of multicollinearity.
- Identify and correct for the problem of heteroskedasticity.
- Identify and correct for the problem of non-linearity.
13.1 Dealing with Outliers
Imagine that we have constructed a dependent variable which contains the earnings growth of individual workers and we see that some workers' earnings increased by more than 400%. We might wonder if this massive change is just a coding error made by the statisticians who produced the data set. Even without that type of error, though, we might worry that the earnings growth of a small number of observations is driving the results of our analysis. If this is the case, we are producing an inaccurate analysis based on results that are not associated with the majority of our observations.
The standard practice in these cases is to either winsorize or trim the subset of observations used in the regression. Both practices limit the influence of outlier values in the dependent variable and allow us to produce a more accurate empirical analysis.
13.1.1 Winsorizing a Dependent Variable
Winsorizing is the process of limiting extreme values in the dependent variable to reduce the effect of (possibly erroneous) outliers. It consists of replacing values below the \(a\)th percentile by that percentile’s value, and values above the \(b\)th percentile by that percentile’s value. Consider the following example using our fake data set:
quantile(fake_data$earnings, probs = c(0.01, 0.99))
min(fake_data$earnings)
max(fake_data$earnings)
From the summary statistics above, we can see that the income earned by the individual at the 1st percentile is 2,831.03 and that the lowest earner in the data set earned 8.88.
We can also see that the income earned by the individual at the 99th percentile is only 607,140.32, while the highest earner in the data earned over 60 million!
These facts suggest to us that there are large outliers in our dependent variable.
We want to get rid of these outliers by winsorizing our data set. What that means is replacing the earnings of all observations below the 1st percentile by exactly the earnings of the individual at the 1st percentile, and replacing the earnings of all observations above the 99th percentile by exactly the earnings of the individual at the 99th percentile.
To winsorize this data, we follow a three-step process:
- We create a new variable called earnings_winsor which is identical to our earnings variable, using mutate(). We choose to store the winsorized version of the dependent variable in a different variable so that we don't overwrite the original data set.
- If earnings are smaller than the 1st percentile, we replace the values of earnings_winsor with the earnings of the individual at the 1st percentile: quantile(fake_data$earnings, probs = 0.01), which is 2831.
- If earnings are larger than the 99th percentile, we replace the values of earnings_winsor with the earnings of the individual at the 99th percentile: quantile(fake_data$earnings, probs = 0.99), which is 607140.
The values of this new variable are created using the command ifelse(). For example, if earnings are less than 2831, the value of earnings_winsor is replaced by 2831 by this command.
We do this below:
fake_data <- fake_data %>%
  mutate(earnings_winsor = ifelse(earnings < 2831, 2831, ifelse(earnings > 607140, 607140, earnings)))
Now we will use this new dependent variable in our regression analysis. If the outliers were not creating problems, there should be little to no change in the results. If they were creating problems, those problems will now be fixed.
Let's take a look at this by first running the regression from Module 10 with the original earnings variable.
#Generate the log of earnings_winsor
fake_data <- fake_data %>%
  mutate(log_earnings_winsor = log(earnings_winsor))
#Run the regression from Module 10
lm(data=fake_data, log(earnings) ~ as.factor(sex))
#Run the previous regression with the log of earnings_winsor
lm(data=fake_data, log_earnings_winsor ~ as.factor(sex))
Do you think that in this case the outliers were having a significant impact before being winsorized?
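One way to answer that question is to place the two sets of estimates side by side. Below is a minimal sketch; the object names model_original and model_winsor are our own and are not part of the module's code.
# Store the two regressions and compare their coefficients side by side
model_original <- lm(log(earnings) ~ as.factor(sex), data = fake_data)
model_winsor <- lm(log_earnings_winsor ~ as.factor(sex), data = fake_data)
cbind(original = coef(model_original), winsorized = coef(model_winsor))
If the two columns are very similar, the outliers were not driving the results.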
13.1.2 Trimming a Dependent Variable
Trimming consists of replacing both values below the \(a\)th percentile and values above the \(b\)th percentile with a missing value. This is done to exclude these outliers from the regression, since R automatically excludes missing (NA) observations in the lm() command.
Below, we look at the commands for trimming a variable. Notice that the steps are quite similar to when we winsorized the same variable. Here, we are directly creating the log of trimmed earnings in one step. Don’t forget to create a new variable to avoid overwriting our original variable!
fake_data <- fake_data %>%
  mutate(log_earnings_trimmed = ifelse(earnings < 2831, NA, ifelse(earnings > 607140, NA, log(earnings))))
And here is the result of the regression with the new dependent variable:
lm(data=fake_data, log_earnings_trimmed ~ as.factor(sex))
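Since lm() silently drops the missing observations, it can be useful to check how many observations the trimming actually removed. A quick sketch (assuming earnings itself has no missing values, so every NA in log_earnings_trimmed comes from the trimming):
# Count the observations set to missing by the trimming
sum(is.na(fake_data$log_earnings_trimmed))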
13.2 Multicollinearity
If two variables are linear combinations of one another, they are multicollinear. R does not allow us to include two variables in a regression that are perfect linear combinations of one another, such as a constant alongside a dummy variable for male and a dummy for female (since female = 1 - male). In the regression below, we see that one of those variables is dropped because of collinearity (R reports its coefficient as NA).
fake_data <- fake_data %>%
  mutate(male = case_when(sex == 'M' ~ 1, sex == 'F' ~ 0)) %>%
  mutate(female = case_when(sex == 'F' ~ 1, sex == 'M' ~ 0))
lm(data=fake_data, log_earnings_trimmed ~ male + female)
Is this a problem? Not really. Multicollinearity is a sign that a variable is not adding any new information. Notice that with the constant term and a male dummy we can already recover the mean earnings of females: the constant term is, by construction, the mean earnings of females, and the male dummy gives the earnings premium paid to male workers.
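We can verify this claim directly. The sketch below is our own check (not part of the original module): it regresses trimmed log earnings on the male dummy alone and compares the results with the group means by sex.
# The intercept should equal the mean log earnings of women,
# and intercept + male coefficient should equal the mean for men
lm(log_earnings_trimmed ~ male, data = fake_data)

fake_data %>%
  group_by(sex) %>%
  summarise(mean_log_earnings = mean(log_earnings_trimmed, na.rm = TRUE))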
While there are some statistical tests for multicollinearity, nothing beats having the right intuition when running a regression. If there is an obvious case where two variables contain basically the same information, we’ll want to avoid including both in the analysis.
For instance, we might have an age variable that includes both years and months (e.g. if a baby is 1 year and 1 month old, then this age variable would be coded as 1 + 1/12 = 1.083). If we include this variable in a regression that also includes an age variable measured only in years (e.g. the baby's age would be coded as 1), then we have a problem of near-perfect multicollinearity. Because the two variables are not perfectly collinear, R will still produce estimates, but the coefficients on these two variables will be very imprecisely estimated and hard to interpret, since each one captures essentially the same information.
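To see what this looks like in practice, the sketch below constructs a hypothetical second age variable, age_months (our own simulated variable, not in the fake data set), that adds a random number of months to age and includes both variables in a regression.
# Simulate an age variable measured in years and months (hypothetical, for illustration only)
fake_data_demo <- fake_data %>%
  mutate(age_months = age + sample(0:11, n(), replace = TRUE) / 12)

# The two age variables are nearly collinear, so their coefficients are very imprecise
lm(log_earnings_trimmed ~ age + age_months, data = fake_data_demo)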
13.3 Heteroskedasticity
When we run a linear regression, we essentially split the outcome into a (linear) part explained by observables (\(x_i\)) and an error term (\(e_i\)):
\[ y_i = a + b x_i + e_i \]
The standard errors in our coefficients depend on \(e_i^2\) (as you might remember from your econometrics courses). Heteroskedasticity refers to the case where the variance of this projection error depends on the observables \(x_i\). For instance, the variance of wages tends to be higher for people who are university educated (some of these people have very high wages) whereas it is small for people who are non-university educated (these people tend to be concentrated in lower paying jobs). R by default assumes that the variance does not depend on the observables, which is known as homoskedasticity. It is safe to say that this is an incredibly restrictive assumption.
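The example above refers to education, which we may not observe in the fake data set, but we can illustrate the same idea with a group we do observe, sex: if the spread of log earnings differs across the two groups, the homoskedasticity assumption is already suspect. A quick sketch:
# Compare the variance of log earnings across groups
fake_data %>%
  group_by(sex) %>%
  summarise(var_log_earnings = var(log_earnings_trimmed, na.rm = TRUE))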
While there are tests for heteroskedasticity, most empirical economists simply include heteroskedasticity-consistent (robust) standard errors as a default in their regressions. The most standard way to do this is to use feols(), a command similar to lm() that comes from the fixest package.
#uncomment this line to install the package!
#install.packages('fixest')
library(fixest)
model = feols(log_earnings_trimmed ~ as.factor(sex), fake_data)
summary(model, vcov="HC1")
Best practice is simply to always use robust standard errors in your own research project, since in most applications the errors will be heteroskedastic.
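If we prefer to stay with lm(), a common alternative (not used in this module) is to combine the sandwich and lmtest packages, which also provide a formal test for heteroskedasticity. A minimal sketch, assuming both packages are installed; the object name ols is our own.
# uncomment to install: install.packages(c('sandwich', 'lmtest'))
library(sandwich)
library(lmtest)

ols <- lm(log_earnings_trimmed ~ as.factor(sex), data = fake_data)
bptest(ols)                                      # Breusch-Pagan test for heteroskedasticity
coeftest(ols, vcov = vcovHC(ols, type = "HC1"))  # coefficients with HC1 robust standard errors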
13.4 Non-linearity
Our regression analysis so far assumes that the relationship between our explained and explanatory variables is linear. If this is not the case, meaning the relationship is non-linear, then we will get inaccurate results from our analysis.
Let's consider an example. We know that earnings increase with age, but what if economic theory predicts that earnings increase by more for each year of age when workers are younger than when they are older? What we are asking here is whether earnings increase with age at a decreasing rate. In essence, we want to check whether there is a concave relationship between age and earnings. We can think of several mechanisms for why this relationship might exist: for a young worker, aging brings higher wages through increased experience on the job; for an older worker, those wage increases are smaller, as there are smaller productivity gains with each additional year of work. In fact, if the productivity of workers decreases as they age, perhaps for reasons related to health, then it is possible to find a negative relationship between age and earnings beyond a certain age: the relationship would be an inverted U-shape.
We could check if this is the case in our model by including a new interaction term that is simply age interacted with itself, which is the equivalent of including age and age squared. We learned how to do this in Module 12. Let’s include this in the regression above.
fake_data <- fake_data %>% mutate(age2 = age^2)
model = lm(log_earnings_trimmed ~ age + age2, fake_data)
summary(model, vcov="HC1")
There does seem to be some evidence in our regression results that this economic theory is correct, since the coefficient on the quadratic term (age2) is both negative and statistically significant.
How do we interpret these results? Let's think about the equation we have just estimated: \[ \log(Earnings_i) = \beta_0 + \beta_1 Age_i + \beta_2 Age^2_i + \varepsilon_i \]
This means that the log earnings of an individual change in the following way with their age: \[ \frac{\partial \log(Earnings_i)}{\partial Age_i} = \beta_1 + 2 \beta_2 Age_i \]
Due to the quadratic term, as age changes, the relationship between age and earnings changes as well.
We have just estimated \(\beta_1\) to be positive and equal to 0.079, and \(\beta_2\) to be negative and equal to -0.001.
This means that, as age increases, its correlation with earnings decreases: \[ \frac{\partial \log(Earnings_i)}{\partial Age_i} = 0.079 - 0.002 Age_i \]
Since the marginal effect changes with the size of \(Age_i\), providing one unique number for the marginal effect becomes difficult.
The most frequently reported version of this effect is the “marginal effect at the means”: the marginal effect of age on earnings when age takes its average value. In our case, this will be equal to 0.079 minus 0.002 times the average value of age.
mean(fake_data$age)
Since the average age of workers is 45, the marginal effect of age at the means is \[ 0.079 - 2 \times 0.001 \times 45 = -0.011 \]
What does that mean? It means that, for the average person, becoming one year older is associated with a decrease in earnings of approximately 1% (the marginal effect on log earnings is -0.011).
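We can reproduce this calculation directly from the estimated coefficients instead of rounding by hand. A short sketch using the model object estimated above; the names b and marginal_effect_at_mean are our own.
# Marginal effect of age at the average age, from the unrounded coefficients
b <- coef(model)
marginal_effect_at_mean <- unname(b["age"] + 2 * b["age2"] * mean(fake_data$age, na.rm = TRUE))
marginal_effect_at_mean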
Notice that this is the effect for the average person. Is the same true for young workers and older workers? To learn how to interpret this non-linearity in age, let’s see how the predicted earnings correlate with age.
We can obtain the predicted earnings with the predict() function and then use a scatterplot to eyeball their relationship with age. We covered how to create scatterplots in Module 8 and the predict() function in Module 10.
Let’s see how to do this step by step.
First, we store our regression in the object fit. Second, we use the function model.frame() to keep only the observations and variables used in our regression. Then, we use predict() to store the predicted earnings obtained from our regression in the data frame yhat. Notice that predict() returns a vector, so we transform it into a data frame using the function as.data.frame(). Finally, we merge the two data frames together using the function cbind().
# Run the regression with the quadratic term
fit = lm(log_earnings_trimmed ~ age + age2, fake_data)

# Predict earnings and save them as yhat in the same data frame
datareg <- model.frame(fit)                           # keep observations used in regression
yhat <- as.data.frame(predict(fit))                   # save predicted earnings as a data frame
datareg = cbind(datareg, yhat)                        # merge the two data frames
datareg <- datareg %>% rename(yhat = "predict(fit)")  # rename the predicted earnings column
Once we have all the information in one unique data frame called datareg, we can display a scatterplot with age on the x-axis and predicted log-earnings on the y-axis.
# Create scatterplot
ggplot(data = datareg, aes(x=age, y=yhat)) + geom_point()
The scatterplot shows an inverted-U relationship between age and the predicted log-earnings. This relationship implies that, when a worker is very young, aging is positively correlated with earnings. However, after a certain age, this correlation becomes negative and the worker earns less with each additional year of age. In fact, based on this graph, workers' earnings start to decline just after the age of 50. Had we modelled this as a linear relationship, we would have missed this important piece of information!
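We can also recover the exact turning point implied by the fitted model, rather than eyeballing it from the scatterplot: setting the marginal effect \(\beta_1 + 2 \beta_2 Age_i\) to zero gives a peak at \(-\beta_1 / (2 \beta_2)\). A short sketch using the fit object from above; the name turning_point is our own.
# Age at which predicted log earnings reach their maximum
turning_point <- -coef(fit)["age"] / (2 * coef(fit)["age2"])
unname(turning_point)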
13.5 Wrap Up
It is important to always follow best practices for regression analysis. Nonetheless, checking and correcting for outliers, as well as addressing heteroskedasticity, multicollinearity and non-linearity can be more of an art than a science. If you need any guidance on whether or not you need to address these issues, please be certain to speak with your instructor, TA, or supervisor.
13.6 Wrap-up Table
Command | Function
---|---
quantile(data$varname, probs = c(0.01, 0.99)) | It calculates the sample quantiles for the specified probabilities.
min(data$varname) | It calculates the minimum value of a variable.
max(data$varname) | It calculates the maximum value of a variable.
feols(depvar ~ indepvar, data) | It runs a regression; heteroskedasticity-robust standard errors can be requested through the vcov argument of summary().
model.frame(object) | It keeps only the variables and observations used by the fitted model object.