13 - Good Regression Practice

Keywords: econ 490, r, outliers, winsorizing, trimming, multicollinearity, heteroskedasticity, nonlinearity
This notebook covers some good practices that should be implemented when we perform regression analysis. We look at how to handle outliers, multicollinearity, heteroskedasticity, and nonlinearity.
Author: Marina Adshade, Paul Corcuera, Giulia Lo Forte, Jane Platt

Published: 29 May 2024

Prerequisites

  1. Importing data into R.
  2. Creating new variables in R.
  3. Running OLS regressions.

Learning Outcomes

  1. Identify and correct for outliers by trimming or winsorizing the dependent variable.
  2. Identify and correct for the problem of multicollinearity.
  3. Identify and correct for the problem of heteroskedasticity.
  4. Identify and correct for the problem of non-linearity.

13.1 Dealing with Outliers

Imagine that we have constructed a dependent variable which contains the earnings growth of individual workers, and we see that some workers' earnings increased by more than 400%. We might wonder if this massive change is just a coding error made by the statisticians who produced the data set. Even without that type of error, though, we might worry that the earnings growth of a small number of observations is driving the results of our analysis. If this is the case, we are producing an inaccurate analysis based on results that are not representative of the majority of our observations.

The standard practice in these cases is to either winsorize or trim the subset of observations that are used in that regression. Both practices remove the outlier values in the dependent variable to allow us to produce a more accurate empirical analysis.

Warning: We should only fix outliers when there is a clear reason to do so. Do not apply the tools below if the extreme values in your summary statistics are plausible for your data. For example, outliers might be a sign that our dependent and explanatory variables have a non-linear relationship. If that is the case, we will want to consider including an interaction term that addresses that non-linearity. A good way to test for this is to create a scatter plot of our dependent and independent variables. This will help us see whether there are actually some outliers, or whether there is just a non-linear relationship.

13.1.1 Winsorizing a Dependent Variable

Winsorizing is the process of limiting extreme values in the dependent variable to reduce the effect of (possibly erroneous) outliers. It consists of replacing values below the \(a\)th percentile by that percentile’s value, and values above the \(b\)th percentile by that percentile’s value. Consider the following example using our fake data set:

#Clear the memory from any pre-existing objects
rm(list=ls())

# loading in our packages
library(tidyverse) #This includes ggplot2! 
library(haven)
library(IRdisplay)

#Open the dataset 
fake_data <- read_dta("../econ490-r/fake_data.dta")  
quantile(fake_data$earnings, probs = c(0.01, 0.99))
min(fake_data$earnings)
max(fake_data$earnings)

From the summary statistics above, we can see that the income earned by the individual at the 1st percentile is 2,831.03 and that the lowest earner in the data set earned 8.88.

We can also see that the income earned by the individual at the 99th percentile is only 607,140.32, while the highest earner in the data earned over 60 million!

These facts suggest to us that there are large outliers in our dependent variable.

We want to get rid of these outliers by winsorizing our data set. What that means is replacing the earnings of all observations below the 1st percentile by exactly the earnings of the individual at the 1st percentile, and replacing the earnings of all observations above the 99th percentile by exactly the earnings of the individual at the 99th percentile.

To winsorize this data, we follow a three-step process:

  1. We create a new variable called earnings_winsor which is identical to our earnings variable using mutate. We choose to store the winsorized version of the dependent variable in a different variable so that we don’t overwrite the original data set.
  2. If earnings are smaller than the 1st percentile, we replace the values of earnings_winsor with the earnings of the individual at the 1st percentile: (quantile(fake_data$earnings, probs = 0.01) = 2831).
  3. If earnings are larger than the 99th percentile, we replace the values of earnings_winsor with the earnings of the individual at the 99th percentile: (quantile(fake_data$earnings, probs = 0.99) = 607140 ).

The values of this new variable will be created using the command ifelse(). If earnings are less than 2831, the value of earnings_winsor is replaced by 2831 using this command.

We do this below:

fake_data <- fake_data %>%
  mutate(earnings_winsor = ifelse(earnings<2831, 2831,  ifelse(earnings>607140, 607140, earnings))) 
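As a side note, here is a minimal sketch of the same step with the cutoffs computed from quantile() instead of hard-coded, so the code keeps working if the data change (the helper names p01 and p99 are our own, not part of the original notebook):

# Sketch: winsorize using computed percentiles rather than hard-coded cutoffs
p01 <- quantile(fake_data$earnings, probs = 0.01)   # 1st percentile
p99 <- quantile(fake_data$earnings, probs = 0.99)   # 99th percentile

fake_data <- fake_data %>%
  mutate(earnings_winsor = pmin(pmax(earnings, p01), p99))  # clamp earnings to [p01, p99]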

Now we will use this new dependent variable in our regression analysis. If the outliers were not creating problems, there should be little change in the results. If they were, those problems should now be fixed.

Let’s take a look at this by first running the regression from Module 10 with the original earnings variable.

#Generate the log of earnings_winsor
fake_data <- fake_data %>%
  mutate(log_earnings_winsor = log(earnings_winsor)) 
#Run the regression from Module 10
lm(data=fake_data, log(earnings) ~ as.factor(sex))
#Run the previous regression with the log of earnings_winsor
lm(data=fake_data, log_earnings_winsor ~ as.factor(sex))

Do you think that in this case the outliers were having a significant impact before being winsorized?

13.1.2 Trimming a Dependent Variable

Trimming consists of replacing both values below the \(a\)th percentile and values above the \(b\)th percentile with a missing value. This excludes these outliers from the regression, since R automatically excludes missing (NA) observations in the lm() command.

Below, we look at the commands for trimming a variable. Notice that the steps are quite similar to when we winsorized the same variable. Here, we are directly creating the log of trimmed earnings in one step. Don’t forget to create a new variable to avoid overwriting our original variable!

fake_data <- fake_data %>%
        mutate(log_earnings_trimmed = ifelse(earnings < 2831, NA,
                                      ifelse(earnings > 607140, NA, log(earnings))))
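As a quick sanity check (our own addition, not part of the original workflow), we can count how many observations the trimming turned into missing values:

sum(is.na(fake_data$log_earnings_trimmed))   # number of trimmed (NA) observations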

And here is the result of the regression with the new dependent variable:

lm(data=fake_data, log_earnings_trimmed ~ as.factor(sex))

13.2 Multicollinearity

If two variables are linear combinations of one another, they are multicollinear. R does not allow us to include in a regression two variables that are perfect linear combinations of one another, such as a constant plus both a dummy variable for male and a dummy for female (since female = 1 - male). In the regression below, we will see that one of those variables is automatically dropped (its coefficient is reported as NA) because of this collinearity.

fake_data <- fake_data %>%
        mutate(male = case_when(sex == 'M' ~ 1, sex == 'F' ~ 0)) %>%
        mutate(female = case_when(sex == 'F' ~ 1, sex == 'M' ~ 0))
lm(data=fake_data, log_earnings_trimmed ~ male + female)

Is this a problem? Not really. Multicollinearity is a sign that a variable is not adding any new information. Notice that with the constant term and a male dummy we already know the mean earnings of females: the constant term is, by construction, the mean earnings of females, and the male dummy gives the earnings premium paid to male workers.

While there are some statistical tests for multicollinearity, nothing beats having the right intuition when running a regression. If there is an obvious case where two variables contain basically the same information, we’ll want to avoid including both in the analysis.

For instance, we might have an age variable that includes both years and months (e.g. if a baby is 1 year and 1 month old, then this age variable would be coded as 1 + 1/12 = 1.083). If we include this variable in a regression that also includes an age variable measured only in years (e.g. the baby's age would be coded as 1), then we have a multicollinearity problem. Because the two variables are not perfectly collinear, R will still produce estimates, but the coefficients on these two variables will be very imprecisely estimated, with inflated standard errors, and neither one will have a clean interpretation.
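To make this concrete, here is a minimal sketch of how we could diagnose this kind of near-multicollinearity with variance inflation factors from the car package (the package and the constructed age_months variable are our own assumptions, not part of the original notebook):

# Sketch: diagnose near-multicollinearity with variance inflation factors
# install.packages('car')   # uncomment this line to install the package
library(car)

# Construct an age-in-months style variable that is almost perfectly collinear with age
fake_data <- fake_data %>%
  mutate(age_months = age * 12 + sample(0:11, n(), replace = TRUE))

fit_vif <- lm(log_earnings_trimmed ~ age + age_months, data = fake_data)
vif(fit_vif)   # very large values signal a multicollinearity problem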

13.3 Heteroskedasticity

When we run a linear regression, we essentially split the outcome into a (linear) part explained by observables (\(x_i\)) and an error term (\(e_i\)):

\[ y_i = a + b x_i + e_i \]

The standard errors in our coefficients depend on \(e_i^2\) (as you might remember from your econometrics courses). Heteroskedasticity refers to the case where the variance of this projection error depends on the observables \(x_i\). For instance, the variance of wages tends to be higher for people who are university educated (some of these people have very high wages) whereas it is small for people who are non-university educated (these people tend to be concentrated in lower paying jobs). R by default assumes that the variance does not depend on the observables, which is known as homoskedasticity. It is safe to say that this is an incredibly restrictive assumption.
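In symbols, homoskedasticity is the assumption that \(Var(e_i \mid x_i) = \sigma^2\) for every observation, while under heteroskedasticity the conditional variance \(Var(e_i \mid x_i) = \sigma^2(x_i)\) changes with the observables.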

While there are tests for heteroskedasticity, most empirical economists simply include heteroskedasticity-consistent (robust) standard errors as a default in their regressions. The most standard way to do this is to use feols(), a command similar to lm() that comes from the fixest package.

# install.packages('fixest')   # uncomment this line to install the package
library(fixest)
model = feols(log_earnings_trimmed ~ as.factor(sex) , fake_data)
summary(model, vcov="HC1")

Best practice is simply to always use robust standard errors in your own research project, since the error terms are very likely to be heteroskedastic.
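If you prefer to keep using lm(), here is a minimal sketch of the same correction with the sandwich and lmtest packages (an alternative we are adding here; the original notebook uses feols):

# Sketch: robust (HC1) standard errors for an lm() fit via sandwich + lmtest
# install.packages(c('sandwich', 'lmtest'))   # uncomment this line to install the packages
library(sandwich)
library(lmtest)

fit_lm <- lm(log_earnings_trimmed ~ as.factor(sex), data = fake_data)
coeftest(fit_lm, vcov = vcovHC(fit_lm, type = "HC1"))   # coefficients with robust standard errors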

13.4 Non-linearity

Our regression analysis so far assumes that the relationship between our explained and explanatory variables is linear. If this is not the case, meaning the relationship is non-linear, then we will get inaccurate results from our analysis.

Let’s consider an example. We know that earnings increase with age, but what if economic theory predicts that earnings increase by more for each year of age when workers are young than when they are older? What we are asking here is whether earnings increase with age at a decreasing rate. In essence, we want to check whether there is a concave relationship between age and earnings. We can think of several mechanisms behind this relationship: as a young worker ages, they get higher wages through increased experience on the job; as an older worker ages, those wage increases are smaller because there are smaller productivity gains with each additional year of work. In fact, if the productivity of workers decreases as they age, perhaps for reasons related to health, then it is possible to find a negative relationship between age and earnings beyond a certain age, giving an inverted U-shaped relationship.

We could check if this is the case in our model by including a new interaction term that is simply age interacted with itself, which is the equivalent of including age and age squared. We learned how to do this in Module 12. Let’s include this in the regression above.

fake_data <- fake_data %>% mutate(age2 = age^2)   # create the quadratic (age squared) term
model = feols(log_earnings_trimmed ~ age + age2, fake_data)   # feols so the robust SEs below apply
summary(model, vcov="HC1")

There does seem to be some evidence in our regression results that this economic theory is correct, since the coefficient on the quadratic term age2 is both negative and statistically significant.

How do we interpret these results? Let’s think about the equation we have just estimated: \[ Earnings_i = \beta_0 + \beta_1 Age_i + \beta_2 Age^2_i + \varepsilon_i \]

This means that earnings of an individual change in the following way with their age: \[ \frac{\partial Earnings_i}{\partial Age_i} = \beta_1 + 2 \beta_2 Age_i \]

Due to the quadratic term, as age changes, the relationship between age and earnings changes as well.

We have just estimated \(\beta_1\) to be positive and equal to 0.079, and \(\beta_2\) to be negative and equal to -0.001.

This means that, as age increases, its correlation with earnings decreases: \[ \frac{\partial Earnings_i}{\partial Age_i} = 0.079 - 0.002 Age_i \]

Since the marginal effect changes with the size of \(Age_i\), providing one unique number for the marginal effect becomes difficult.

The most frequently reported version of this effect is the “marginal effect at the means”: the marginal effect of age on earnings when age takes its average value. In our case, this will be equal to 0.079 minus 0.002 times the average value of age.

mean(fake_data$age)

Since the average age of workers is 45, the marginal effect of age at the means is \[ 0.079 - 2 * 0.001 * 45 = -0.011 \]

What does that mean? It means that, for the average person, becoming one year older is associated with a decrease of 0.011 in log earnings, or a decrease in earnings of roughly 1.1%.
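Here is a minimal sketch of that arithmetic done directly in R, pulling the estimates out of the fitted model object (this assumes the quadratic model with age and age2 estimated above):

# Sketch: marginal effect of age evaluated at the average age
b <- coef(model)                                   # coefficients from the quadratic model above
b["age"] + 2 * b["age2"] * mean(fake_data$age)     # beta1 + 2 * beta2 * mean(age)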

Notice that this is the effect for the average person. Is the same true for young workers and older workers? To learn how to interpret this non-linearity in age, let’s see how the predicted earnings correlate with age.

We can obtain the predicted earnings with the predict function and then use a scatterplot to eyeball its relationship with age. We covered how to create scatterplots in Module 8 and the predict function in Module 10.

Let’s see how to do this step by step.

First, we store our regression in the object fit. Second, we use the function model.frame to keep only the observations and variables used in our regression. Then, we use predict to store the predicted earnings obtained from our regression in the data frame yhat. Notice that predict returns a vector, so we transform it into a data frame using the function as.data.frame. Finally, we merge the two data frames together using the function cbind.

# Run the regression with the quadratic term
fit = lm(log_earnings_trimmed ~ age + age2, fake_data)

# Predict earnings and save them as yhat in the same data frame
datareg <- model.frame(fit)          # keep observations used in regression
yhat <- as.data.frame(predict(fit))  # save predicted earnings as data frame
datareg = cbind(datareg, yhat)       # merge the two dataframes
datareg <- datareg %>% rename(yhat = "predict(fit)") # rename

Once we have all the information in one unique data frame called datareg, we can display a scatterplot with age on the x-axis and predicted log-earnings on the y-axis.

# Create scatterplot
ggplot(data = datareg, aes(x=age, y=yhat)) + geom_point()

The scatterplot shows an inverted-U relationship between age and predicted log-earnings. This relationship implies that, when a worker is very young, aging is positively correlated with earnings. After a certain age, however, this correlation becomes negative and the worker earns less with each additional year of age. In fact, based on this graph, workers' earnings start to decline just after the age of 50. Had we estimated a purely linear model, we would have missed this important piece of information!

Note: If there is a theoretical reason for believing that non-linearity exists, R provides some tests for non-linearity. We can also create a scatter-plot to see if we observe a non-linear relationship in the data. We covered that approach in Module 8.
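Returning to the first point, here is a minimal sketch of one such functional-form check, the Ramsey RESET test from the lmtest package (our choice of test; the note above does not name a specific one):

# Sketch: Ramsey RESET test for omitted non-linear terms
library(lmtest)
linear_fit <- lm(log_earnings_trimmed ~ age, data = fake_data)
resettest(linear_fit, power = 2:3)   # a small p-value suggests the linear model misses non-linearity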

13.5 Wrap Up

It is important to always follow best practices for regression analysis. Nonetheless, checking and correcting for outliers, as well as addressing heteroskedasticity, multicollinearity and non-linearity can be more of an art than a science. If you need any guidance on whether or not you need to address these issues, please be certain to speak with your instructor, TA, or supervisor.

13.6 Wrap-up Table

  • quantile(data$varname, probs = c(0.01, 0.99)): calculates the sample quantiles of a variable for the specified probabilities.
  • min(data$varname): calculates the minimum value of a variable.
  • max(data$varname): calculates the maximum value of a variable.
  • feols(depvar ~ indepvar, data): runs a regression with the fixest package; robust standard errors can then be requested with summary(model, vcov = "HC1").
  • model.frame(object): keeps only the variables and observations used by the fitted model object.

References

  • Syntax of the quantile() function
  • Syntax of the model.frame() function
