# Clear the memory from any pre-existing objects
rm(list=ls())
# Load packages
library(dplyr)
library(tidyr)
library(haven)
14 - Panel Data Regressions
Prerequisites
- Run OLS Regressions.
Learning Outcomes
- Prepare data for time-series analysis.
- Run panel data regressions.
- Create lagged variables.
- Understand and work with fixed-effects.
- Correct for heteroskedasticity and serial correlation.
14.0 Intro
This module uses the Penn World Tables, which measure income, input, output, and productivity across 183 countries between 1950 and 2019. Before beginning this module, download this data in .dta format.
14.1 What is Panel Data?
In economics, we typically have data consisting of many units observed at a particular point in time. This is called cross-sectional data. There may be several different versions of the data set that are collected over time (monthly, annually, etc.), but each version includes an entirely different set of individuals.
For example, let's consider a Canadian cross-sectional data set: General Social Survey Cycle 31: Family, 2017. In this data set, the first observation is a 55-year-old married woman who lives in Alberta with two children. When the General Social Survey Cycle 25: Family, 2011 was collected six years earlier, there were probably similar women surveyed, but it is extremely unlikely that this exact same woman was included in that data set as well. Even if she was, we would have no way to match her responses across the two cycles of the survey.
Cross-sectional data allows us to explore variation between individuals at one point in time but does not allow us to explore variation over time for those same individuals.
Time-series data sets contain observations over several years for only one unit, such as country, state, province, etc. For example, measures of income, output, unemployment, and fertility for Canada from 1960 to 2020 would be considered time-series data. Time-series data allows us to explore variation over time for one individual unit (e.g. Canada), but does not allow us to explore variation between individual units (i.e. multiple countries) at any one point in time.
Panel data allows us to observe the same unit across multiple time periods. For example, the Penn World Tables is a panel data set that measures income, output, input, and productivity, covering 183 countries from 1950 to the near present. There are also microdata panel data sets that follow the same people over time. One example is the Canadian National Longitudinal Survey of Children and Youth (NLSCY), which followed the same children from 1994 to 2010, surveying them every two years as they progressed from childhood to adulthood.
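To make this structure concrete, here is a tiny made-up example of what panel data looks like (the country names are real, but the numbers and the data frame toy_panel are purely illustrative):
# The same countries observed in multiple years
toy_panel <- data.frame(
  country = c("Canada", "Canada", "Mexico", "Mexico"),
  year    = c(2019, 2020, 2019, 2020),
  gdp     = c(1.74, 1.65, 1.27, 1.09)   # hypothetical GDP values
)
toy_panel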
Panel data sets allow us to answer questions that we cannot answer with time-series and cross-sectional data. They allow us to simultaneously explore variation over time for individual countries (for example) and variation between individuals at one point in time. This approach is extremely productive for two reasons:
- Panel data sets are large, much larger than if we were to use data collected at one point in time.
- Panel data regressions control for variables that do not change over time and are difficult to measure, such as geography and culture.
In this sense, panel data sets allow us to answer empirical questions that cannot be answered with other types of data such as cross-sectional or time-series data.
Before we move forward exploring panel data sets in this module, we should understand the two main types of panel data:
- A Balanced Panel is a panel data set in which we observe all units over all included time periods. Suppose we have a data set following the school outcomes of a select group of \(N\) children over \(T\) years. This is common in studies which investigate the effects of early childhood interventions on relevant outcomes over time. If the panel data set is balanced, we will see \(T\) observations for each child corresponding to the \(T\) years they have been tracked. As a result, our data set in total will have \(n = N*T\) observations.
- An Unbalanced Panel is a panel data set in which we do not observe all units over all included time periods. Suppose that, in our data set tracking select children's education outcomes over time, some children drop out of the study. This panel data set would be an unbalanced panel because it would necessarily have \(n < N*T\) observations, since the children who dropped out would not have observations for the years they were no longer in the study.
We learned the techniques to create a balanced panel in Module 6. Essentially, all that is needed is to create a new data set that includes only the years for which there are no missing values, as in the sketch below.
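As a quick refresher, here is a minimal sketch of that idea, assuming a hypothetical data frame study_data with a unit identifier id, a time variable year, and a variable of interest x (these names are placeholders, not from the data used below):
# Keep only units observed, with non-missing x, in every time period
n_years <- n_distinct(study_data$year)   # total number of time periods
balanced <- study_data %>%
  filter(!is.na(x)) %>%                  # drop rows with missing values of x
  group_by(id) %>%
  filter(n() == n_years) %>%             # keep units with a full set of periods
  ungroup()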
14.2 Preparing Our Data for Panel Analysis
The first step in any panel data analysis is to identify which variable is the panel variable and which variable is the time variable. The panel variable is the identifier of the units that are observed over time. The second step is indicating that information to R.
We are going to use the Penn World Tables data (discussed above) in this example. In that data set, the panel variable is either country or countrycode, and the time variable is year.
# Import data (remember to change the directory to the location of this data file)
#setwd()
pwt100 <- read_dta("../econ490-r/pwt100.dta") # change me!

# Get summary of the data
summary(pwt100)
You may have noticed that the variable year is an integer (i.e. a number like 2010) and that country and countrycode are character variables (i.e. they are words like "Canada"). In R, the panel and time identifiers do not need to be numeric: what matters is that each unit-year pair uniquely identifies an observation. Moreover, we need to sort our data by the unique identifier (country or countrycode in our case) and the time variable (year).
# Order data according to countrycode and year, and call it df
df <- pwt100 %>% arrange(countrycode, year)
Now that we have sorted our data, we need to tell R that the data frame df contains panel data. We do so by relying on the package plm, which contains various tools for linear models with panel data. We load the package plm and use the pdata.frame() function to create a panel data frame. In the index argument of pdata.frame(), we specify the name of the cross-sectional unit identifier (countrycode) and the time variable (year).
# Install and load plm package
#install.packages("plm")  # uncomment to install the package!
library(plm)

# Convert data frame to panel data format
panel_data <- pdata.frame(df, index = c("countrycode", "year"))
To check that we have correctly converted our data into a panel data frame, we can use the class or pdim functions. Note that pdim tells us whether our data frame is balanced, as well as the number of cross-sectional units and time periods.
class(panel_data)
pdim(panel_data)
14.3 Basic Regressions with Panel Data
For now, we are going to focus on the skills we need to run our own panel data regressions. Section 14.6 provides more detail on the econometrics of panel data regressions that may help you understand these approaches. Please make sure you understand that theory before beginning your own research.
Now that we have specified the panel and time variables we are working with, we can begin to run regressions using our panel data. For panel data regressions, we simply replace lm with the command plm. The command plm takes another input, model. We can specify model to be fixed-effects, random-effects, or pooled OLS. For now, let's use a pooled OLS with model="pooling". More details on the other models are addressed below.
Let’s try this out by regressing the natural log of GDP per capita on the natural log of human capital.
# Create the two new variables
panel_data <- panel_data %>% mutate(lngdp = log(rgdpo/pop), lnhc = log(hc))

# Estimate specification
model <- plm(lngdp ~ lnhc, data = panel_data, model = "pooling")
summary(model)
The coefficients in a panel regression are interpreted similarly to those in a basic OLS regression. Because we have taken the natural log of our variables, we can interpret the coefficient on each explanatory variable as the \(\beta\)% change in the dependent variable associated with a 1% increase in that explanatory variable.
Thus, in the regression results above, a 1% increase in human capital leads to a roughly 2% increase in real GDP per capita. That’s a huge effect, but then again this model is almost certainly misspecified due to omitted variable bias. Namely, we are likely missing a number of explanatory variables that explain variation in both GDP per capita and human capital, such as savings and population growth rates.
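As a quick worked example of this log-log interpretation, using the rough coefficient from the output above:
\[ \%\Delta \left(\text{GDP per capita}\right) \approx \hat{\beta} \times \%\Delta \left(\text{human capital}\right) \approx 2 \times 1\% = 2\% \]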
One thing we know is that GDP per capita can be impacted by the individual characteristics of a country that do not change much over time. For example, it is known that distance from the equator has an impact on the standard of living of a country; countries that are closer to the equator are generally poorer than those farther from it. This is a time-invariant characteristic that we might want to control for in our regression. Similarly, we know that GDP per capita could be similarly impacted in many countries by a shock at one point in time. For example, a worldwide global recession would affect the GDP per capita of all countries at a given time such that values of GDP per capita in this time period are uniformly different in all countries from values in other periods. That seems like a time-variant characteristic (time trend) that we might want to control for in our regression. Fortunately, with panel data regressions, we can account for these sources of endogeneity. Let’s look at how panel data helps us do this.
14.3.1 Fixed-Effects Models
We refer to shocks that are invariant along some dimension (e.g. household-level shocks that don't vary with time, or time-specific shocks that don't vary across households) as fixed-effects. For instance, we can define household fixed-effects, time fixed-effects, and so on. Notice that this is an assumption on the error terms; as such, when we add fixed-effects to our specification, they become part of the model we assume to be true.
When we ran our regression of log real GDP per capita on log human capital from earlier, we were concerned about omitted variable bias and endogeneity. Specifically, we were concerned about distance from the equator positively impacting both human capital and real GDP per capita, in which case our measure of human capital would be correlated with our error term, preventing us from interpreting our regression result as causal. We are now able to add country fixed-effects to our regression to account for this and come closer to determining the pure effect of human capital on GDP growth. There are two ways to do this. Let’s look at the more obvious one first.
Approach 1: create a series of country dummy variables and include them in the regression. For example, we would have one dummy variable called “Canada” that would be equal to 1 if the country is Canada and 0 if not. We would have dummy variables for all but one of the countries in this data set to avoid perfect collinearity. Rather than define all of these dummies manually and include them in our regression command, we can simply factorize them and R will include them automatically.
# Factorize countrycode
panel_data <- panel_data %>% mutate(countrycode = factor(countrycode))
Now we can add the factorized version of country codes to our panel linear model.
model <- plm(lngdp ~ lnhc + countrycode, data = panel_data, model = "pooling")
summary(model)
The problem with this approach is that we end up with a huge table containing the coefficients of every country dummy, none of which we care about. We are interested in the relationship between GDP and human capital, not the mean values of GDP for each country relative to the omitted one. Luckily for us, a well-known result is that controlling for fixed-effects is equivalent to adding multiple dummy variables. This leads us into the second approach to including fixed-effects in a regression.
Approach 2: alternatively, we can apply fixed-effects to the regression by adding model="within" as an option in the regression.
model <- plm(lngdp ~ lnhc, data = panel_data, model = "within")
summary(model)
We obtained the same coefficient and standard errors on our explanatory variable using both approaches!
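If you want to verify this equivalence yourself, here is a quick sanity check that re-estimates both versions and compares the lnhc coefficients directly (the object names model_dummies and model_within are just illustrative):
# Re-estimate both versions and compare the lnhc coefficient
model_dummies <- plm(lngdp ~ lnhc + countrycode, data = panel_data, model = "pooling")
model_within  <- plm(lngdp ~ lnhc, data = panel_data, model = "within")
coef(model_dummies)["lnhc"]
coef(model_within)["lnhc"]   # should match the dummy-variable estimate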
14.3.2 Random-Effects Models
One type of model we can also run is a random-effects model. The main difference between a random- and a fixed-effects model is that, with the random-effects model, differences across countries are assumed to be random. This allows us to treat time-invariant variables, such as latitude, as control variables. To run a random-effects model, just add model="random" as an argument of plm.
model <- plm(lngdp ~ lnhc, data = panel_data, model = "random")
summary(model)
As we can see, with this data and choice of variables, there is little difference in results between all of these models.
This, however, will not always be the case. The test to determine if you should use the fixed-effects model or the random-effects model is called the Hausman test.
To run this test in R, we first have to store the fixed-effect and the random-effect models in two different objects, one called fixed and the other called random.
fixed  <- plm(lngdp ~ lnhc, data = panel_data, model = "within")
random <- plm(lngdp ~ lnhc, data = panel_data, model = "random")
Then, we perform the Hausman test by comparing the two objects fixed and random using the function phtest. Remember, the null hypothesis is that the preferred model is random-effects.
phtest(fixed, random)
As you can see, the p-value associated with this test suggests that we reject the null hypothesis (random-effects) and that we should adopt a fixed-effects model.
14.3.3 What if We Want to Control for Multiple Fixed-Effects?
Let's say we have run a panel data regression with fixed-effects, and we think that nothing more needs to be done to control for factors that are constant across our cross-sectional units (i.e. countries) at any one point in time (i.e. years). However, for very long series (for example, those spanning over 20 years), we will want to check whether time dummy variables are also needed.
In R, we can easily do this using two functions: pFtest() and plmtest().
First, let’s save our models with and without time fixed-effects in two objects.
# No time fixed-effects
fixed <- plm(lngdp ~ lnhc, data = panel_data, model = "within")

# Time fixed-effects
fixed_yearfe <- plm(lngdp ~ lnhc + factor(year), data = panel_data, model = "within")
Now that we have saved both models, we can run the tests. pFtest() requires both models as inputs, while plmtest() only needs the model without time fixed-effects as input.
# Option 1: pFtest
pFtest(fixed_yearfe, fixed)
# Option 2: plmtest
plmtest(fixed, effect = "time", type = "bp")
Both tests report a p-value smaller than 0.05, which suggests that we reject the null hypothesis and need time fixed-effects in our model.
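As an aside, both functions return standard htest objects, so (as a small illustrative sketch) we can also extract the p-values programmatically rather than reading them off the printed output:
# Extract the p-values directly from the test objects
pFtest(fixed_yearfe, fixed)$p.value
plmtest(fixed, effect = "time", type = "bp")$p.value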
14.4 Creating New Panel Variables
Panel data also provides us with a new source of variation: variation over time. This means that we have access to a wide variety of variables we can include. For instance, we can create lags (values of variables in previous periods) and leads (values of variables in future periods). Once we have defined our panel data set using the pdata.frame function (which we did earlier), we can create lags with the dplyr::lag() function and leads with the dplyr::lead() function. Note that it is important to refer to these functions by their full names, dplyr::lag() and dplyr::lead(), since several packages define functions with these names. Failing to do so may result in lag() and lead() not behaving as expected.
For example, let’s create a new variable that lags the natural log of GDP per capita by one period.
panel_data <- panel_data %>% mutate(lag1_lngdp = dplyr::lag(lngdp, 1))
If we wanted to lag this same variable ten periods, we would write it as such:
panel_data <- panel_data %>% mutate(lag10_lngdp = dplyr::lag(lngdp, 10))
Let’s inspect the first 50 rows of our data frame to check that we have created lagged variables as expected.
head(panel_data[, c("lngdp", "lag1_lngdp", "lag10_lngdp")],50)
We can include lagged variables directly in our regression if we believe that past values of real GDP per capita influence current levels of real GDP per capita.
model <- plm(lngdp ~ lnhc + lag10_lngdp, data = panel_data, model = "within")
summary(model)
While we included lags of one period and ten periods back as examples, we can use any lag length. In practice, short lags, such as one and two periods back, are the most common choice when including past values of independent variables as controls; a sketch follows below.
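For instance, here is a minimal sketch of that common specification (lag2_lngdp is a new variable created here purely for illustration, not one used elsewhere in this module):
# Create a two-period lag and include short lags as controls
panel_data <- panel_data %>% mutate(lag2_lngdp = dplyr::lag(lngdp, 2))
model <- plm(lngdp ~ lnhc + lag1_lngdp + lag2_lngdp, data = panel_data, model = "within")
summary(model)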
Finally, these variables are useful if we are trying to measure the growth rate of a variable. Recall that the growth rate of a variable \(X\) is approximately equal to \(ln(X_{t}) - ln(X_{t-1})\), where the subscripts indicate time.
For example, if we now want to include the population growth rate in our regression, we can create that new variable as the log difference \(ln(pop_{t}) - ln(pop_{t-1})\).
# Create log of population
panel_data$lnpop <- log(panel_data$pop)

# Create the population growth rate
panel_data <- panel_data %>% mutate(lnn = lnpop - dplyr::lag(lnpop, 1))
Another variable that might be useful is the growth rate of GDP per capita, computed in the same way.
panel_data <- panel_data %>% mutate(dlngdp = lngdp - dplyr::lag(lngdp, 1))
Let’s put this all together in a regression to see the effect of the growth rate of population on growth rate of GDP per capita, controlling for human capital and the level of GDP per capita in the previous year:
model <- plm(dlngdp ~ lag1_lngdp + lnn + lnhc, data = panel_data, model = "within")
summary(model)
14.5 Is Our Panel Data Regression Properly Specified?
While there are the typical concerns with interpreting the coefficients of regressions (i.e. multicollinearity, inferring causality), there are some topics which require special treatment when working with panel data.
14.5.1 Heteroskedasticity
As always, when running regressions, we must consider whether our residuals are heteroskedastic (not constant for all values of \(X\)). To test our panel data regression for heteroskedasticity in the residuals, we need to calculate a modified Wald statistic. We use the Breusch-Pagan test that can be found in the lmtest
package.
library(lmtest)
Once we have loaded the lmtest package, we can call the Breusch-Pagan test with the bptest() function. The first argument of bptest() is the model we want to test; in our case, it is the specification for log GDP per capita and log human capital. The second argument is the data frame.
bptest(lngdp ~ lnhc + countrycode, data = panel_data)
The null hypothesis is homoskedasticity (or constant variance of the error term). From the output above, we can see that we reject the null hypothesis and conclude that the residuals in this regression are heteroskedastic.
We can control for heteroskedasticity in different ways when we use a fixed-effects model. The coeftest() function allows us to estimate several heteroskedasticity-consistent covariance estimators.
# Estimate model
fixed <- plm(lngdp ~ lnhc, data = panel_data, model = "within")

# Show original coefficients
coeftest(fixed)

# Show heteroskedasticity-consistent coefficients
coeftest(fixed, vcovHC)
14.5.2 Serial Correlation
In time-series setups where we only observe a single unit over time (no cross-sectional dimension) we might be worried that a linear regression model like
\[ Y_t = \alpha + \beta X_t + \varepsilon_t \]
can have errors that not only are heteroskedastic (i.e. that depend on observables \(X_t\)) but can also be correlated across time. For instance, if \(Y_t\) was income, then \(\varepsilon_t\) may represent income shocks (including transitory and permanent components). The permanent income shocks are, by definition, very persistent over time. This would mean that \(\varepsilon_{t-1}\) affects (and thus is correlated with) shocks in the next period \(\varepsilon_t\). This problem is called serial correlation or autocorrelation, and if it exists, the assumptions of the regression model (i.e. unbiasedness, consistency, etc.) are violated. This can take the form of regressions where a variable is correlated with lagged versions of the same variable.
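To see what serial correlation looks like, here is a minimal simulation sketch (entirely made-up data, unrelated to the Penn World Tables) of AR(1) errors, in which each period's shock inherits part of the previous period's shock:
# Simulate AR(1) errors: e_t = 0.8 * e_{t-1} + white noise
set.seed(123)
e <- numeric(100)
for (t in 2:100) {
  e[t] <- 0.8 * e[t - 1] + rnorm(1)
}
# The sample correlation between e_t and e_{t-1} is far from zero
cor(e[-1], e[-100])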
To test our panel data regression for serial correlation, we can run a Breusch-Godfrey/Wooldridge test. In R, we can do this easily with pbgtest().
# Estimate model
fixed <- plm(lngdp ~ lnhc, data = panel_data, model = "within")

# Run test
pbgtest(fixed)
The null hypothesis is that there is no serial correlation in the residuals. From the output, we see that we reject the null hypothesis and conclude that the residuals are serially correlated. One method for dealing with serial correlation in panel data regressions is to use the coeftest() function again, this time with the Arellano method of computing the covariance matrix. Note that the Arellano method allows a fully general structure with respect to both heteroskedasticity and serial correlation, so our standard errors are effectively robust to both threats.
# Estimate model
fixed <- plm(lngdp ~ lnhc, data = panel_data, model = "within")

# Show original coefficients
coeftest(fixed)

# Show heteroskedasticity- and serial-correlation-consistent coefficients
coeftest(fixed, vcovHC(fixed, method = "arellano"))
14.5.3 Granger Causality
In the regressions we have been running in this example, we have found that the level of human capital is correlated with the level of GDP per capita. But have we proven that having high human capital causes countries to be wealthier? Or is it possible that wealthier countries can afford to invest in human capital? This is known as the issue of reverse causality, which arises when our dependent variable determines our independent variable.
The Granger causality test allows us to unpack some of the causality in these regressions. While understanding how this test works is beyond the scope of this notebook, we can look at an example using this data.
The first thing we need to do is ensure that our panel is balanced. In the Penn World Tables, there are no missing values for real GDP and for population, but there are missing values for human capital. We can balance our panel by simply dropping all of the observations that do not include that measure.
panel_data <- panel_data %>% drop_na(lnhc)
Next, we can run the test that R provides for Granger causality: grangertest(). The first input is the model we want to use, the second is the data, and the optional third input, order, is the number of lags we want to use (by default, R uses only 1 lag).
granger_test <- grangertest(lngdp ~ lnhc, data = panel_data, order = 3)
print(granger_test)
Note that R gives us two models. In model 1, both previous values of GDP and human capital are included: this is an unrestricted model that includes all Granger-causal terms. In model 2, the Granger-causal terms are omitted and only previous values of GDP are included.
From our results, we can reject the null hypothesis of lack of Granger causality. The evidence seems to suggest that high levels of human capital cause countries to be wealthier.
Please speak to your instructor, supervisor, or TA if you need help with this test.
14.6 How is Panel Data Helpful?
In typical cross-sectional settings, it is hard to defend the selection on observables assumption (otherwise known as conditional independence). However, panel data allows us to control for unobserved time-invariant heterogeneity.
Consider the following example. Household income \(y_{jt}\) at time \(t\) can be split into two components:
\[ y_{jt} = e_{jt} + \Psi_{j} \]
where \(\Psi_{j}\) is a measure of unobserved household-level determinants of income, such as social programs targeted towards certain households.
Consider what happens when we compute each \(j\) household’s average income, average value of \(e\), and average value of \(\Psi\) across time \(t\) in the data:
\[ \bar{y}_{J}= \frac{1}{\sum_{j,t} \mathbf{1}\{ j = J \} } \sum_{j,t} y_{jt} \mathbf{1}\{ j = J \} \]
\[ \bar{e}_{J}= \frac{1}{\sum_{j,t} \mathbf{1}\{ j = J \} } \sum_{j,t} e_{jt} \mathbf{1}\{ j = J \} \]
\[ \bar{\Psi}_{J} = \Psi_{J} \]
Notice that the mean of \(\Psi_{j}\) does not change over time for a fixed household \(j\). Hence, we can subtract the two household level means from the original equation to get:
\[ y_{jt} - \bar{y}_{j} = e_{jt} - \bar{e}_{j} + \underbrace{ \Psi_{j} - \bar{\Psi}_{j} }_\text{equals zero!} \]
Therefore, we are able to get rid of the unobserved heterogeneity in household determinants of income via "de-meaning"! This is called a within-group or fixed-effects transformation. If we believe these types of unobserved errors/shocks are creating endogeneity, we can get rid of them using this powerful trick. In some cases, we may alternatively choose to apply a first-difference transformation to our regression specification. This entails subtracting the regression in one period not from its mean across time, but from the regression in the previous period. In this case, time-invariant characteristics are similarly removed from the regression, since they are constant across all periods \(t\).
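To convince ourselves that de-meaning really removes \(\Psi_{j}\), here is a small simulated sketch (hypothetical data, with names matching the notation above; dplyr is already loaded):
# Build a toy panel: 3 households (j) observed over 5 periods (t)
set.seed(1)
toy <- expand.grid(j = 1:3, t = 1:5) %>%
  group_by(j) %>%
  mutate(psi = rnorm(1),        # time-invariant household component
         e   = rnorm(n()),      # idiosyncratic shocks
         y   = e + psi) %>%     # observed income
  mutate(y_demeaned = y - mean(y),
         e_demeaned = e - mean(e)) %>%
  ungroup()

# After de-meaning, psi is gone: the two columns are identical
all.equal(toy$y_demeaned, toy$e_demeaned)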
14.7 Common Mistakes
One common mistake is not respecting the order R expects for the index variables. By default, R orders panel data with the cross-sectional ID first and the time variable second. If we change the order of the indices, the estimates produced by R will change.
If we invert the order of the cross-sectional ID (country) and the time variable (year), we may get different results.
# Default order
plm(lngdp ~ lnhc, data = panel_data, model = "within")

# Inverted order
plm(lngdp ~ lnhc, data = panel_data, model = "within", index = c("year", "countrycode"))
Another common mistake happens with the lag() and lead() functions. Since several packages define functions with these names, it is always best to specify to R that we want to use the lag() and lead() functions from the package dplyr.
See what happens when we forget to specify it: do you see any difference between lag1_lngdp and new_lag1_lngdp?
# Create lag using dplyr::lag
panel_data <- panel_data %>% mutate(lag1_lngdp = dplyr::lag(lngdp, 1))

# Create lag using lag
panel_data <- panel_data %>% mutate(new_lag1_lngdp = lag(lngdp, 1))

# Check the difference
head(panel_data[, c("lngdp", "lag1_lngdp", "new_lag1_lngdp")], 50)
14.8 Wrap Up
In this module, we’ve learned how to address linear regression in the case where we have access to two dimensions: cross-sectional variation and time variation. The usefulness of time variation is that it allows us to control for time-invariant components of the error term which may be causing endogeneity. We also investigated different ways for addressing problems such as heteroskedasticity and autocorrelation in our standard errors when working specifically with panel data. In the next module, we will cover a popular research design method: difference-in-differences.
14.9 Wrap-up Table
Command | Function |
---|---|
pdata.frame | Transforms a data frame into panel data format. |
plm | Estimates a linear model with panel data. Use option "within" for fixed-effects and "random" for random-effects. |
phtest | Performs a test to choose between fixed-effects and random-effects models. |
pFtest | Performs a test to determine whether time fixed-effects are needed. |
dplyr::lag | Creates lag variables. |
dplyr::lead | Creates lead variables. |
bptest | Tests for heteroskedasticity. |
pbgtest | Tests for serial correlation. |
grangertest | Tests for Granger causality. |