{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 15 - Panel Data Regressions\n", "\n", "Marina Adshade, Paul Corcuera, Giulia Lo Forte, Jane Platt \n", "2024-05-29\n", "\n", "## Prerequisites\n", "\n", "1. Run OLS Regressions.\n", "\n", "## Learning Outcomes\n", "\n", "1. Prepare data for time series analysis.\n", "2. Run panel data regressions.\n", "3. Create lagged variables.\n", "4. Understand and work with fixed-effects.\n", "5. Correct for heteroskedasticity and serial correlation.\n", "\n", "## 15.0 Intro\n", "\n", "This module uses the [Penn World\n", "Tables](https://www.rug.nl/ggdc/productivity/pwt/?lang=en) which measure\n", "income, input, output, and productivity, covering 183 countries between\n", "1950 and 2019. Before beginning this module, download this data in the\n", "specified Stata format." ], "id": "8603b3d6-80b3-429d-b07b-f82985a5867e" }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import stata_setup\n", "stata_setup.config('C:\\Program Files\\Stata18/','se')" ], "id": "5e46e466" }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ ">>> import sys\n", ">>> sys.path.append('/Applications/Stata/utilities') # make sure this is the same as what you set up in Module 01, Section 1.3: Setting Up the STATA Path\n", ">>> from pystata import config\n", ">>> config.init('se')" ], "id": "09fa9c02" }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 15.1 What is Panel Data?\n", "\n", "In economics, we typically have data consisting of many units observed\n", "at a particular point in time. This is called cross-sectional data.\n", "There may be several different versions of the data set that are\n", "collected over time (monthly, annually, etc.), but each version includes\n", "an entirely different set of individuals.\n", "\n", "For example, let’s consider a Canadian cross-sectional data set:\n", "*General Social Survey Cycle 31: Family, 2017*. In this data set, the\n", "first observation is a 55 year old married woman who lives in Alberta\n", "with two children. When the *General Social Survey Cycle 25: Family,\n", "2011* was collected six years earlier, there were probably similar women\n", "surveyed, but it is extremely unlikely that this exact same woman was\n", "included in that data set as well. Even if she was included, we would\n", "have no way to match her data over the two years of the survey.\n", "\n", "Cross-sectional data allows us to explore variation between individuals\n", "at one point in time but does not allow us to explore variation over\n", "time for those same individuals.\n", "\n", "Time-series data sets contain observations over several years for only\n", "one unit, such as country, state, province, etc. For example, measures\n", "of income, output, unemployment, and fertility for Canada from 1960 to\n", "2020 would be considered time-series data. Time-series data allows us to\n", "explore variation over time for one individual unit (e.g. Canada), but\n", "does not allow us to explore variation between individual units\n", "(i.e. multiple countries) at any one point in time.\n", "\n", "Panel data allows us to observe the same unit across multiple time\n", "periods. For example, the [Penn World\n", "Tables](https://www.rug.nl/ggdc/productivity/pwt/?lang=en) is a panel\n", "data set that measures income, output, input, and productivity, covering\n", "183 countries from 1950 to the near present. There are also microdata\n", "panel data sets that follow the same people over time. One example is\n", "the Canadian National Longitudinal Survey of Children and Youth (NLSCY),\n", "which followed the same children from 1994 to 2010, surveying them every\n", "two years as they progressed from childhood to adulthood.\n", "\n", "Panel data sets allow us to answer questions that we cannot answer with\n", "time-series and cross-sectional data. They allow us to simultaneously\n", "explore variation over time for individual countries (for example) and\n", "variation between individuals at one point in time. This approach is\n", "extremely productive for two reasons:\n", "\n", "1. Panel data sets are large, much larger than if we were to use data\n", " collected at one point in time.\n", "2. Panel data regressions control for variables that do not change over\n", " time and are difficult to measure, such as geography and culture.\n", "\n", "In this sense, panel data sets allow us to answer empirical questions\n", "that cannot be answered with other types of data such as cross-sectional\n", "or time-series data.\n", "\n", "Before we move forward exploring panel data sets in this module, we\n", "should understand the two main types of panel data:\n", "\n", "- A **Balanced Panel** is a panel data set in which we observe *all*\n", " units over *all* included time periods. Suppose we have a data set\n", " following the school outcomes of a select group of $N$ children over\n", " $T$ years. This is common in studies which investigate the effects\n", " of early childhood interventions on relevant outcomes over time. If\n", " the panel data set is balanced, we will see $T$ observations for\n", " each child corresponding to the $T$ years they have been tracked. As\n", " a result, our data set in total will have $n = N*T$ observations.\n", "- An **Unbalanced Panel** is a panel data set in which we do *not*\n", " observe all units over all included time periods. Suppose in our\n", " data set tracking select children’s education outcomes over time,\n", " and that some children drop out of the study. This panel data set\n", " would be an unbalanced panel because it would necessarily have\n", " $n < N*T$ observations, since the children who dropped out would not\n", " have observations for the years they were no longer in the study.\n", "\n", "We learned the techniques to create a balanced panel in [Module\n", "7](https://comet.arts.ubc.ca/docs/Researc/econ490-pystata/07_Within_Group.html).\n", "Essentially, all that is needed is to create a new data set that\n", "includes only the years for which there are no missing values.\n", "\n", "## 15.2 Preparing Our Data for Panel Analysis\n", "\n", "The first step in any panel data analysis is to identify which variable\n", "is the panel variable and which variable is the time variable. The panel\n", "variable is the identifier of the units that are observed over time. The\n", "second step is indicating that information to Stata.\n", "\n", "We are going to use the Penn World Data (discussed above) in this\n", "example. In that data set, the panel variable is either *country* or\n", "*countrycode*, and the time variable is *year*." ], "id": "1aba66c2-1076-49c1-8836-e20ae6001a15" }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "clear*\n", "*cd \"\"\n", "use pwt1001, clear\n", "describe country countrycode year" ], "id": "c4f82b24" }, { "cell_type": "markdown", "metadata": {}, "source": [ "When the decribe command executed, did you see that the variable *year*\n", "is an integer (i.e. a number like 2020) and that *country* or\n", "*countrycode* are string variables (i.e. they are words like “Canada”)?\n", "Specifying the panel and time variables requires that both of the\n", "variables we are using are coded as numeric variables, and so our first\n", "step is to create a new numeric variable that represents the country\n", "variable.\n", "\n", "To do this, we can use the `encode` command that we saw in [Module\n", "6](https://comet.arts.ubc.ca/docs/Research/econ490-stata/06_Creating_Variables.html)." ], "id": "8f1250f6-1d02-4e2c-84a7-51f15271bfca" }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "encode countrycode, gen(ccode) \n", "\n", "label var ccode \"Numeric code that represents the country\"" ], "id": "47162de0" }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see in our data editor that this command created a unique code\n", "for each country and saved it in a variable that we have named *ccode*.\n", "For example, in the data editor we can see that Canada was given the\n", "code 31 and Brazil was given the code 25.\n", "\n", "Now we are able to proceed with specifying both our panel and time\n", "variables by using the command `xtset`. With this command, we first list\n", "the panel variable and then the time variable, followed by the interval\n", "of observation." ], "id": "9eb1ebb9-443b-4c32-8f7b-dbaeefa43339" }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtset ccode year, yearly" ], "id": "a23cfc89" }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can tell that we have done this correctly when the output indicates\n", "that the “Time variable” is “year”.\n", "\n", "Within our panel data set, our use of this command above states that we\n", "observe countries (indicated by country codes) over many time periods\n", "that are separated into year groupings (delta = 1 year, meaning that\n", "each country has an observation for each year, specified by the *yearly*\n", "option). The option for periodicity of the observations is helpful. For\n", "instance, if we wanted each country to have an observation for every two\n", "years instead of every year, we would specify delta(2) as our\n", "periodicity option to `xtset`.\n", "\n", "Always make sure to check the output of `xtset` carefully to see that\n", "the time variable and panel variable have been properly specified.\n", "\n", "## 15.3 Basic Regressions with Panel Data\n", "\n", "For now, we are going to focus on the skills we need to run our own\n", "panel data regressions. In section 15.6, there are more details about\n", "the econometrics of panel data regressions that may help with the\n", "understanding of these approaches. Please make sure you understand that\n", "theory before beginning your own research.\n", "\n", "Now that we have specified the panel and time variables we are working\n", "with, we can begin to run regressions using our panel data. For panel\n", "data regressions we simply replace `regress` witht the command `xtreg`.\n", "\n", "Let’s try this out by regressing the natural log of GDP per capita on\n", "the natural log of human capital. We have included the `describe` to\n", "help us understand the variables we are using in this exercise." ], "id": "c487354c-45af-4ff8-9ab9-c9f31e376f85" }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "describe rgdpe pop hc\n", "\n", "generate lngdp = ln(rgdpo/pop)\n", "generate lnhc = ln(hc)\n", "\n", "xtreg lngdp lnhc" ], "id": "3961f0f2" }, { "cell_type": "markdown", "metadata": {}, "source": [ "The coefficients in a panel regression are interpreted similarly to\n", "those in a basic OLS regression. Because we have taken the natural log\n", "of our variables, we can interpret the coefficient on each explanatory\n", "variable as being a $\\beta$ % increase in the dependent variable\n", "associated with a 1% increase in the explanatory variable.\n", "\n", "Thus, in the regression results above, a 1% increase in human capital\n", "leads to a roughly 2% increase in real GDP per capita. That’s a huge\n", "effect, but then again this model is almost certainly misspecified due\n", "to omitted variable bias. Namely, we are likely missing a number of\n", "explanatory variables that explain variation in both GDP per capita and\n", "human capital, such as savings and population growth rates.\n", "\n", "One thing we know is that GDP per capita can be impacted by the\n", "individual characteristics of a country that do not change much over\n", "time. For example, it is known that distance from the equator has an\n", "impact on the standard of living of a country; countries that are closer\n", "to the equator are generally poorer than those farther from it. This is\n", "a time-invariant characteristic that we might want to control for in our\n", "regression. Similarly, we know that GDP per capita could be similarly\n", "impacted in many countries by a shock at one point in time. For example,\n", "a worldwide global recession would affect the GDP per capita of all\n", "countries at a given time such that values of GDP per capita in this\n", "time period are uniformly different in all countries from values in\n", "other periods. That seems like a time-variant characteristic (time\n", "trend) that we might want to control for in our regression. Fortunately,\n", "with panel data regressions, we can account for these sources of\n", "endogeneity. Let’s look at how panel data helps us do this.\n", "\n", "### 15.3.1 Fixed-Effects Models\n", "\n", "We refer to shocks that are invariant based on some variable\n", "(e.g. household level shocks that don’t vary with year or time-specific\n", "shocks that don’t vary with household) as **fixed-effects**. For\n", "instance, we can define household fixed-effects, time fixed-effects, and\n", "so on. Notice that this is an assumption on the error terms, and as\n", "such, when we include fixed-effects to our specification they become\n", "part of the model we assume to be true.\n", "\n", "When we ran our regression of log real GDP per capita on log human\n", "capital from earlier, we were concerned about omitted variable bias and\n", "endogeneity. Specifically, we were concerned about distance from the\n", "equator positively impacting both human capital and real GDP per capita,\n", "in which case our measure of human capital would be correlated with our\n", "error term, preventing us from interpreting our regression result as\n", "causal. We are now able to add country fixed-effects to our regression\n", "to account for this and come closer to determining the pure effect of\n", "human capital on GDP growth. There are two ways to do this. Let’s look\n", "at the more obvious one first.\n", "\n", "**Approach 1**: create a series of country dummy variables and include\n", "them in the regression. For example, we would have one dummy variable\n", "called “Canada” that would be equal to 1 if the country is Canada and 0\n", "if not. We would have dummy variables for all but one of the countries\n", "in this data set to avoid perfect collinearity. Rather than defining all\n", "of these dummies manually and including them in our `regress` command,\n", "we can simply add `i.varname` into our regression. Stata will then\n", "manually create all of the country dummy variables for us." ], "id": "5c84da33-46e8-4d66-8d63-b7f215de2d35" }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc i.ccode" ], "id": "06b9a068" }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem with this approach is that we end up with a huge table\n", "containing the coefficients of every country dummy, none of which we\n", "care about. We are interested in the relationship between GDP and human\n", "capital, not the mean values of GDP for each country relative to the\n", "omitted one. Luckily for us, a well-known result is that controlling for\n", "fixed-effects is equivalent to adding multiple dummy variables. This\n", "leads us into the second approach to including fixed-effects in a\n", "regression.\n", "\n", "**Approach 2**: We can alternatively apply fixed-effects to the\n", "regression by adding `fe` as an option on the regression." ], "id": "192b1eb6-86dc-4c49-b261-70a2a6b2a2d8" }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc, fe" ], "id": "0df759de" }, { "cell_type": "markdown", "metadata": {}, "source": [ "We obtained the same coefficient and standard errors on our `lnhc`\n", "explanatory variable using both approaches!\n", "\n", "### 15.3.2 Random-Effects Models\n", "\n", "One type of model we can also run is a **random-effects model**. The\n", "main difference between a random and fixed-effects model is that, with\n", "the random-effects model, differences across countries are assumed to be\n", "random. This allows us to treat time-invariant variables such as\n", "latitude as control variables. To run a random-effects model, just add\n", "`re` as an option in `xtreg` like below." ], "id": "5ce6a452-b1d6-484d-8ff2-60acb81767ed" }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc, re" ], "id": "5d759e80" }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, with this data and choice of variables, there is little\n", "difference in results between all of these models.\n", "\n", "This, however, will not always be the case. The test to determine if you\n", "should use the fixed-effects model (fe) or the random-effects model (re)\n", "is called the Hausman test.\n", "\n", "To run this test in Stata, start by running a fixed-effects model and\n", "ask Stata to store the estimation results under then name “fixed”:" ], "id": "35440d0c-de91-4e46-9b5c-28af467ef902" }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc, fe\n", "\n", "estimates store fixed " ], "id": "31860315" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, run a random-effects model and again ask Stata to store the\n", "estimation results as “random”:" ], "id": "7e812077-c539-4415-9328-2394a0278a9e" }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc, re \n", "\n", "estimates store random" ], "id": "dbf6822a" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, run the command for the Hausman test, which compares the two sets\n", "of estimates:" ], "id": "d9a2abb1-d236-4c06-9863-f8792c757a76" }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "hausman fixed random" ], "id": "a05ae99d" }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, the results of this test suggest that we would reject the\n", "null hypothesis that the random-effects model is preferred, and thus we\n", "should adopt a fixed-effects model.\n", "\n", "### 15.3.3 What if We Want to Control for Multiple Fixed-Effects?\n", "\n", "Let’s say we have run a panel data regression with fixed-effects, and we\n", "think that no more needs to be done to control for factors that are\n", "constant across our cross-sectional variables (i.e. countries) at any\n", "one point in time (i.e. years). However, for very long series (for\n", "example those over 20 years), we will want to check that time dummy\n", "variables are not also needed.\n", "\n", "The Stata command `testparm` tests whether the coefficients on three or\n", "more variables are equal to zero. When used after a fixed-effects panel\n", "data regression that includes time dummies, `testparm` will tell us if\n", "the dummies are equal to 0. If they are equal to zero, then no\n", "time-fixed-effects are needed. If they are not, we will want to include\n", "them in all of our regressions.\n", "\n", "As we have already learned, we can add `i.year` to include a new dummy\n", "variable for each year and include that in our regression. Now, let’s\n", "test to see if that is necessary in the fixed-effects regression by\n", "running the command for `testparm`." ], "id": "b3bd1d98-a687-4539-9373-8c46d6fd2cc1" }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc i.year\n", "\n", "testparm i.year" ], "id": "4af7d616" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stata runs a joint test to see if the coefficients on the dummies for\n", "all years are equal to 0. The null hypothesis on this test is that they\n", "are equal to zero. As the test statistic is less than 0.05, we can\n", "reject the null hypothesis and will want to include the year dummies in\n", "our analysis.\n", "\n", "## 15.4 Creating New Panel Variables\n", "\n", "Panel data also provides us with a new source of variation: variation\n", "over time. This means that we have access to a wide variety of variables\n", "we can include. For instance, we can create lags (variables in previous\n", "periods) and leads (variables in future periods). Once we have defined\n", "our panel data set using the `xtset` command (which we did earlier) we\n", "can create the lags using `Lnumber.variable` and the leads using\n", "`Fnumber.variable`.\n", "\n", "For example, let’s create a new variable that lags the natural log of\n", "GDP per capita by one period." ], "id": "1ac1fb17-556b-4437-97bf-7ec15adef4ff" }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "generate lag1_lngdp = L1.lngdp" ], "id": "a0f020b7" }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we wanted to lag this same variable ten periods, we would write it as\n", "such:" ], "id": "f5ab6338-14fd-4dab-bce5-19f6f73dd562" }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "generate lag10_lngdp = L10.lngdp" ], "id": "dc1df26b" }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can include lagged variables directly in our regression if we believe\n", "that past values of real GDP per capita influence current levels of real\n", "GDP per capita." ], "id": "dbea7c35-04db-4a57-819c-eedb05dde495" }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp L1.lngdp L10.lngdp lnhc i.year, fe" ], "id": "d6d33178" }, { "cell_type": "markdown", "metadata": {}, "source": [ "While we included lags from the previous period and 10 periods back as\n", "examples, we can use any period for our lags. In fact, including lag\n", "variables as controls for recent periods such as one lag back and two\n", "lags back is the most common choice for inclusion of past values of\n", "independent variables as controls.\n", "\n", "Finally, these variables are useful if we are trying to measure the\n", "growth rate of a variable. Recall that the growth rate of a variable X\n", "is just equal to $ln(X_{t}) - ln(X_{t-1})$ where the subscripts indicate\n", "time.\n", "\n", "For example, if we want to now include the natural log of the population\n", "growth rate in our regression, we can create that new variable by taking\n", "the natural log of the population growth rate\n", "$ln(pop_{t}) - ln(pop_{t-1})$" ], "id": "5234c907-7642-4da3-bacd-d829fc9b89f8" }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "generate lnn = ln(ln(pop)-ln(L1.pop))" ], "id": "bd9297f5" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another variable that might also be useful is the natural log of the\n", "growth rate of GDP per capita." ], "id": "f9d0198e-97cc-4b0b-be43-999ba21ad9e8" }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "generate dlngdp=ln(lngdp - L1.lngdp)" ], "id": "d589f8b9" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s put this all together in a regression and see what results we get:" ], "id": "543e9a12-e649-4130-aef2-3239e0c334c8" }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg dlngdp L1.lngdp lnhc lnn i.year, fe" ], "id": "a63fb161" }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 15.5 Is our Panel Data Regression Properly Specified?\n", "\n", "While there are the typical concerns with interpreting the coefficients\n", "of regressions (i.e. multicollinearity, inferring causality), there are\n", "some topics which require special treatment when working with panel\n", "data.\n", "\n", "### 15.5.1 Heteroskedasticity\n", "\n", "As always, when running regressions, we must consider whether our\n", "residuals are heteroskedastic (not constant for all values of $X$). To\n", "test our panel data regression for heteroskedasticity in the residuals,\n", "we need to calculate a modified Wald statistic. Fortunately, there is a\n", "Stata package available for installation that will make this test very\n", "easy for us to conduct. To install this package into your version of\n", "Stata, simply type:" ], "id": "51ab750c-b64d-4053-b9b5-1b92be1c8e66" }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "ssc install xttest3" ], "id": "b994cd31" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s now test this with our original regression, the regression of log\n", "real GDP per capita on log human capital with the inclusion of\n", "fixed-effects." ], "id": "fd328fec-0358-4ee0-ad29-3d8ffd2b66b0" }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtreg lngdp lnhc, fe\n", "xttest3" ], "id": "8a05c064" }, { "cell_type": "markdown", "metadata": {}, "source": [ "The null hypothesis is homoskedasticity (or constant variance of the\n", "error term). From the output above, we can see that we reject the null\n", "hypothesis and conclude that the residuals in this regression are\n", "heteroskedastic.\n", "\n", "The best method for dealing with heteroskedasticity in panel data\n", "regression is by using generalized least squares, or GLS. There are a\n", "number of techniques to estimate GLS equations in Stata, but the\n", "recommended approach is the Prais-Winsten method.\n", "\n", "This is easily implemented by replacing the command `xtreg` with\n", "`xtpcse` and including the option `het`." ], "id": "f68c0d1f-68ff-4db6-ba9e-e55be53dc944" }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtpcse lngdp lnhc, het" ], "id": "c75cd7a7" }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 15.5.2 Serial Correlation\n", "\n", "In time-series setups where we only observe a single unit over time (no\n", "cross-sectional dimension) we might be worried that a linear regression\n", "model like\n", "\n", "$$ Y_t = \\alpha + \\beta X_t + \\varepsilon_t $$\n", "\n", "can have errors that not only are heteroskedastic (i.e. that depend on\n", "observables $X_t$) but can also be correlated across time. For instance,\n", "if $Y_t$ was income, then $\\varepsilon_t$ may represent income shocks\n", "(including transitory and permanent components). The permanent income\n", "shocks are, by definition, very persistent over time. This would mean\n", "that $\\varepsilon_{t-1}$ affects (and thus is correlated with) shocks in\n", "the next period $\\varepsilon_t$. This problem is called serial\n", "correlation or autocorrelation, and if it exists, the assumptions of the\n", "regression model (i.e. unbiasedness, consistency, etc.) are violated.\n", "This can take the form of regressions where a variable is correlated\n", "with lagged versions of the same variable.\n", "\n", "To test our panel data regression for serial correlation, we need to run\n", "a Woolridge test. Fortunately, there are multiple packages in Stata\n", "available for installation that make this test automatic to conduct. Run\n", "the command below to see some of these packages." ], "id": "f1a91f36-e1d2-4889-bdca-3e3561f7f94f" }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "search xtserial" ], "id": "ef37a5ae" }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can choose any one of these packages and follow the (brief)\n", "instructions to install it. Once it’s installed, we can conduct the\n", "Woolridge test for autocorrelation below." ], "id": "42fcc4a9-3c7d-4cd9-b7cd-ae28d1af8fa2" }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtserial lngdp lnhc" ], "id": "263706b8" }, { "cell_type": "markdown", "metadata": {}, "source": [ "The null hypothesis is that there is no serial correlation between\n", "residuals. From the output, we can see that we reject the null\n", "hypothesis and conclude the variables are correlated with lagged\n", "versions of themselves. One method for dealing with this is by using the\n", "same Prais-Winsten method to estimate a GLS equation. This is easily\n", "implemented by replacing the command `xtreg` with `xtpcse` and including\n", "the option `corr(ar1)`." ], "id": "03ba98b3-23f3-4078-bed0-67a2b0bab28d" }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtpcse lngdp lnhc, het corr(ar1) " ], "id": "41d4da50" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we have continued to use the `het` option to account for\n", "heteroskedasticity in our standard errors. We can also see that our\n", "results have not drifted significantly from what they were originally\n", "when running our first, most simple regression of log GDP per capita on\n", "log human capital.\n", "\n", "**Warning:** The Prais-Winsten approach does not control for panel and\n", "time fixed-effects. You will want to use `testparm` to test both the\n", "need for year fixed-effects and, in the example we have been using here,\n", "country fixed-effects. Now that we have used `encode` to create a new\n", "country variable that is numeric, we can include country dummies simply\n", "by including `i.ccode` into our regression.\n", "\n", "### 15.5.3 Granger Causality\n", "\n", "In the regressions that we have been running in this example, we have\n", "found that the level of human capital is correlated with the level of\n", "GDP per capita. But have we proven that having high human capital causes\n", "countries to be wealthier? Or is is possible that wealthier countries\n", "can afford to invest in human capital? This is known as the issue of\n", "**reverse causality**, and arises when our independent variable\n", "determines our dependent variable.\n", "\n", "The Granger Causality test allows use to unpack some of the causality in\n", "these regressions. While understanding how this test works is beyond the\n", "scope of this notebook, we can look at an example using this data.\n", "\n", "The first thing we need to do is ensure that our panel is balanced. In\n", "the Penn World Tables, there are no missing values for real GDP and for\n", "population, but there are missing values for human capital. We can\n", "balance our panel by simply dropping all of the observations that do not\n", "include that measure." ], "id": "183e17a8-405b-4ef8-82dc-8fe1f7ee25e3" }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "drop if hc==." ], "id": "2a4ddc57" }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we can run the test that is provided by Stata for Granger\n", "Causality: `xtgcause`. We need to install this package before we begin\n", "using the same approach you used with `xtserial` above.\n", "\n", "Now let’s test the causality between GDP and human capital!" ], "id": "4338a0b1-382d-40bf-88b1-19ec8f18335e" }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "%%stata\n", "\n", "xtgcause lngdp lnhc" ], "id": "f75f2138" }, { "cell_type": "markdown", "metadata": {}, "source": [ "From our results, we can reject the null hypothesis that high levels of\n", "wealth in countries causes higher levels of human capital. The evidence\n", "seems to suggest that high human capital causes countries to be\n", "wealthier.\n", "\n", "Please speak to your instructor, supervisor, or TA if you need help with\n", "this test.\n", "\n", "## 15.6 How is Panel Data Helpful?\n", "\n", "In typical cross-sectional settings, it is hard to defend the selection\n", "on observables assumption (otherwise known as conditional independence).\n", "However, panel data allows us to control for unobserved time-invariant\n", "heterogeneity.\n", "\n", "Consider the following example. Household income $y_{jt}$ at time $t$\n", "can be split into two components:\n", "\n", "$$\n", "y_{jt} = e_{jt} + \\Psi_{j}\n", "$$\n", "\n", "where $\\Psi_{j}$ is a measure of unobserved household-level determinants\n", "of income, such as social programs targeted towards certain households.\n", "\n", "Consider what happens when we compute each $j$ household’s average\n", "income, average value of $e$, and average value of $\\Psi$ across time\n", "$t$ in the data:\n", "\n", "$$\n", "\\bar{y}_{J}= \\frac{1}{\\sum_{j,t} \\mathbf{1}\\{ j = J \\} } \\sum_{j,t} y_{jt} \\mathbf{1}\\{ j = J \\}\n", "$$\n", "\n", "$$\n", "\\bar{e}_{J}= \\frac{1}{\\sum_{j,t} \\mathbf{1}\\{ j = J \\} } \\sum_{j,t} e_{jt} \\mathbf{1}\\{ j = J \\}\n", "$$\n", "\n", "$$\n", "\\bar{\\Psi}_{J} = \\Psi_{J}\n", "$$\n", "\n", "Notice that the mean of $\\Psi_{j}$ does not change over time for a fixed\n", "household $j$. Hence, we can subtract the two household level means from\n", "the original equation to get:\n", "\n", "$$\n", "y_{jt} - \\bar{y}_{j} = e_{jt} - \\bar{e}_{j} + \\underbrace{ \\Psi_{j} - \\bar{\\Psi}_{j} }_\\text{equals zero!}\n", "$$\n", "\n", "Therefore, we are able to get rid of the unobserved heterogeneity in\n", "household determinants of income via “de-meaning”! This is called a\n", "within-group or fixed-effects transformation. If we believe these types\n", "of unobserved errors/shocks are creating endogeneity, we can get rid of\n", "them using this powerful trick. In some cases, we may alternatively\n", "choose to do a first-difference transformation of our regression\n", "specification. This entails subtracting the regression in one period not\n", "from it’s expectation across time, but from the regression in the\n", "previous period. In this case, time-invariant characteristics are\n", "similarly removed from the regression since they are constant across all\n", "periods $t$.\n", "\n", "## 15.7 Wrap Up\n", "\n", "In this module, we’ve learned how to address linear regression in the\n", "case where we have access to two dimensions: cross-sectional variation\n", "and time variation. The usefulness of time variation is that it allows\n", "us to control for time-invariant components of the error term which may\n", "be causing endogeneity. We also investigated different ways for\n", "addressing problems such as heteroskedasticity and autocorrelation in\n", "our standard errors when working specifically with panel data. In the\n", "next module, we will cover a popular research design method:\n", "difference-in-differences.\n", "\n", "## 15.8 Wrap-up Table\n", "\n", "| Command | Function |\n", "|--------------------------------|----------------------------------------|\n", "| `xtset panelvar timevar, interval` | It tells Stata that we are working with panel data, as well as which variables are our panel variable, time variable, and what at what interval the data was recorded. |\n", "| `xtreg depvar indepvar` | It runs a panel regression. We can add options to this, such as `fe` for fixed-effects, and `re` for random-effects. |\n", "| `hausman model1 model2` | It performs the Hausman test on `model1` and `model2` to determine which more accurately models our data. |\n", "| `testparm i.varname` | It evaluates whether multiple coefficients are equal to zero. |\n", "| `Lnumber.variable` | It creates a lagged variable. |\n", "| `Fnumber.variable` | It creates a lead variable. |\n", "| `xttest3` | It calculates a modified Wald statistic to test for heteroskedasticity. |\n", "| `xtpcse depvar indepvar, het` | It calculates a GLS regression to deal with heteroskedasticity, following the Prais-Winsten method. We can add `corr(ar1)` to account for serial correlation. |\n", "| `xtserial depvar indepvar` | It conducts a Woolridge test for autocorrelation. |\n", "| `xtgcause depvar indepvar` | It conducts a Granger Causality test for reverse causality. |\n", "\n", "## References\n", "\n", "[Formatting and managing\n", "dates](https://www.youtube.com/watch?v=SOQvXICIRNY&t=149s)
\n", "[Time-series operators\n", "(lags)](https://www.youtube.com/watch?v=ik8r4WvrPkc&t=224s)" ], "id": "e5ef6077-bdb2-40f0-b6a1-9ed418238680" } ], "nbformat": 4, "nbformat_minor": 5, "metadata": { "kernelspec": { "name": "python3", "display_name": "Python 3 (ipykernel)", "language": "python", "path": "/usr/local/share/jupyter/kernels/python3" }, "language_info": { "name": "python", "codemirror_mode": { "name": "ipython", "version": "3" }, "file_extension": ".py", "mimetype": "text/x-python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } } }