```
library(haven)
library(tidyverse)
library(IRdisplay)
fake_data <- read_dta("../econ490-stata/fake_data.dta") # change me!
```

# ECON 490: Conducting Within Group Analysis (6)

## Prerequisites

- Inspect and clean the variables of a data set.
- Generate basic variables for a variety of purposes.

## Learning Outcomes

- Use `arrange`, `group_by`, `group_keys`, and `ungroup` to sort and organize data for specific purposes.
- Generate variables with `summarise` to analyze patterns within groups of data.
- Reshape data frames using `pivot_wider` and `pivot_longer`.

## 6.1 Key Functions for Group Analysis

When we are working on a particular project, it is often important to know how to summarize data for specific groupings, whether of variables or of observations meeting specific conditions. In this notebook, we will look at a variety of functions for conducting this group-level analysis. We will rely heavily on the `dplyr` package, which we have implicitly imported through the `tidyverse` package. Let’s import these packages and load in our “fake_data” now. Recall that this data set simulates information on workers in the years 1995-2012 in a fake country where a training program was introduced in 2003 to boost their earnings.

Now that we’ve loaded in our data and already know how to view it, clean it, and generate additional variables for it as needed, we can look at some helpful commands for grouping this data.

#### 6.1.1 `arrange`

Before grouping data, we may want to order our data set based on the values of a particular variable. The `arrange` function helps us achieve this. It takes in a data frame and a variable and rearranges the data frame in ascending order of that variable's values; to sort in descending order instead, we wrap the variable in the `desc` function. As an example, let’s rearrange our entire data set in order of the variable *year*.

```
# arrange the dataframe by ascending year
fake_data %>% arrange(year)

# arrange the dataframe by descending year
fake_data %>% arrange(desc(year))
```

We can also pass multiple variables to the `arrange` function to indicate how we should further sort our data within each year grouping. For instance, including the *region* variable will further sort each year grouping in order of region.

```
fake_data %>% arrange(year, region)
```
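The `desc` wrapper combines with multi-variable sorts as well. Below is a minimal sketch on a toy data frame (the values are made up for illustration), sorting ascending by *year* but descending by *region* within each year:

```r
library(dplyr)

# toy data frame with made-up values
df <- tibble(year = c(1996, 1995, 1995), region = c(1, 1, 2))

# ascending year, then descending region within each year
res <- df %>% arrange(year, desc(region))
res
```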

#### 6.1.2 `group_by`

This is one of the most pivotal functions in R. It allows us to group a data frame by the values of a specific variable and perform further operations on those groups. Let’s say that we wanted to group our data set by *region* and count the number of observations in each region. To accomplish this, we can simply pass this variable as a parameter to our `group_by` function and pipe the result into the `tally` function.

```
fake_data %>% group_by(region) %>% tally()
```

Notice how the `group_by` function automatically arranges the region groups in ascending order for us. Unlike the `arrange` function, it does not preserve the data set in its entirety: it collapses our data set into groups. It is therefore important not to overwrite our original data frame with the result of a `group_by` if we want to preserve the original data.

We can also pass multiple arguments to `group_by`. If we pass both *region* and *treated* to our function as inputs, our region groups will be further split into observations which are and are not treated. Let’s count the number of treated and untreated observations in each region.

```
fake_data %>% group_by(region, treated) %>% tally()
```

Finally, we can pipe one `group_by` into another. In this case, the second `group_by` simply overwrites the first. For example, if we pipe our original grouping by region into a grouping by treatment status, we get as output a data frame counting the total number of observations that are treated and untreated.

```
fake_data %>% group_by(region) %>% group_by(treated) %>% tally()
```
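If we instead want the second grouping added on top of the first rather than replacing it, dplyr's `group_by` accepts an `.add = TRUE` argument. A minimal sketch on a toy data frame (the values are made up for illustration):

```r
library(dplyr)

# toy data frame with made-up values
df <- tibble(region = c("A", "A", "B"), treated = c(0, 1, 1))

# by default, a second group_by() overwrites the first grouping...
overwritten <- df %>% group_by(region) %>% group_by(treated) %>% tally()

# ...but .add = TRUE keeps the existing grouping variables as well
added <- df %>% group_by(region) %>% group_by(treated, .add = TRUE) %>% tally()

overwritten  # counts by treated only
added        # counts by region and treated together
```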

#### 6.1.3 `group_keys`

This function allows us to see the specific groups for a group_by data frame we have created. For instance, if we wanted to see every year in the data, we could group by *year* and then apply the `group_keys`

function.

```
fake_data %>% group_by(year) %>% group_keys()
```

This is equivalent to using the `unique` function directly on a column of our data set, as below.

```
unique(fake_data$year)
```

The output here is a vector rather than another data frame as above.
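One subtle difference worth knowing: `group_by` sorts the groups, while `unique` preserves the order of first appearance. A small sketch on a toy column (the values are made up for illustration):

```r
library(dplyr)

# toy data frame with made-up values
df <- tibble(year = c(1997, 1995, 1997, 1996))

keys <- df %>% group_by(year) %>% group_keys()  # sorted data frame: 1995, 1996, 1997
vals <- unique(df$year)                         # vector in appearance order: 1997, 1995, 1996
```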

#### 6.1.4 `ungroup`

We can even selectively remove grouping variables from a grouped data frame. Say we grouped by *region* and *treated* but then wanted to just count how many treated groups there are by *region*. If this double grouped data frame is defined as A, we can simply use `ungroup`

to remove the grouping by treatment status.

```
A <- fake_data %>% group_by(region, treated) %>% tally()
A
```

```
A %>% ungroup(treated) %>% tally()
```

We may also be interested in knowing how many groupings we have created. We can remove all grouping variables by leaving the input of `ungroup()` empty.

```
A %>% ungroup() %>% tally()
```

## 6.2 Generating Variables for Group Analysis

We have already seen how to redefine and add new variables to a data frame using the `df$ <-` assignment syntax. We have also seen how to use the `mutate` function to add new variables to a data frame. However, we often want to add new variables to grouped data frames to display information about the different groups rather than about the different observations in the original data frame. That is where `summarise` comes in.

The `summarise` function gives us access to a variety of common functions we can use to generate variables corresponding to groups. For instance, we may want to find the mean earnings of each region. To do this, we can group on *region* and then add a variable to our grouped data frame which aggregates the mean of the *earnings* variable for each region group. We must use the `summarise` function for this, since it gives us access to the earnings of every member of each group.

```
fake_data %>% group_by(region) %>% summarise(meanearnings = mean(earnings))
```

We may want more detailed information about each region. We can pass a series of arguments to `summarise` and it will generate variables for all of these requests. Let’s say we want the mean and standard deviation of *earnings* for each group, as well as the range of *earnings* for each group.

```
fake_data %>%
    group_by(region) %>%
    summarise(meanearnings = mean(earnings),
              stdevearnings = sd(earnings),
              range = max(earnings) - min(earnings))
```

We may also want to calculate the number of observations in each region as an additional variable. Before, we could simply group by our *region* variable and then immediately apply the `tally` function. However, now that we have defined a series of other variables, the data set on which `tally` operates is different. Watch what happens when we try to use `tally` after using `summarise`.

```
fake_data %>%
    group_by(region) %>%
    summarise(meanearnings = mean(earnings),
              stdevearnings = sd(earnings),
              range = max(earnings) - min(earnings)) %>%
    tally()
```

Now watch what happens when we try to use `tally` before using `summarise`.

```
fake_data %>%
    group_by(region) %>%
    tally() %>%
    summarise(meanearnings = mean(earnings),
              stdevearnings = sd(earnings),
              range = max(earnings) - min(earnings))
```

In the first case, `tally` does not have the necessary information left in the data frame to count the number of observations in each region. In the second case, `tally` has shrunk the data frame so much that the functions within `summarise` do not have the necessary information to make their calculations.

This is where `n` comes in. This is a special function used within `summarise`. It represents the number of observations within each group of a data frame, so it is naturally paired with `group_by`. It can also be paired with `mutate` when we are working with the number of observations in the data set as a whole (i.e. with a single group, in which case `n()` simply returns the total number of observations).

```
fake_data %>%
    group_by(region) %>%
    summarise(meanearnings = mean(earnings),
              stdevearnings = sd(earnings),
              range = max(earnings) - min(earnings),
              total = n())
```
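To see the `mutate` pairing of `n()` in action, here is a minimal sketch on a toy data frame (the values are made up for illustration): `summarise` collapses to one row per group, while `mutate` keeps every row and attaches the group size to each observation.

```r
library(dplyr)

# toy data frame with made-up values
df <- tibble(region = c("A", "A", "B"), earnings = c(10, 20, 30))

# summarise() collapses to one row per region
by_group <- df %>% group_by(region) %>% summarise(total = n())

# mutate() keeps all rows, adding each observation's group size
with_counts <- df %>% group_by(region) %>% mutate(total = n()) %>% ungroup()
```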

The entire process of generating variables for group analysis in this section is similar to collapsing a data set in Stata. Luckily, it can be done more quickly here in R.

## 6.3 Reshaping Data

Sometimes in our process of data analysis, we want to restructure our data frame. To do this, we can take advantage of a series of functions within the `tidyr` package, which we have imported implicitly by loading the `tidyverse` package. These functions allow us to quickly change the format of our data frame without having to redefine all of its columns and rows manually.

For instance, we often want to transform our data from “wide” to “long” format, or vice versa. Suppose that we wish to make our data set more “cross-sectional” in appearance by dropping the age variable and adding an earnings variable for each year, with the values in these new columns corresponding to the earnings of each person in that year. Effectively, by adding columns, we are making our data set “wider”, so it is no surprise that the function is called `pivot_wider`. It takes the following arguments:

1. `names_from`: the column from which to get the names of the output columns;
2. `values_from`: the column from which to get the cell values.

```
wide_data <- fake_data %>%
    arrange(year) %>%
    select(-age) %>%
    pivot_wider(names_from = "year", values_from = "earnings")
head(wide_data)
```

We can see that the function above took the names from *year* and generated a new variable for each of them from 1995 to 2012, then supplied the corresponding values from *earnings* to each of these year variables. When a worker’s information isn’t recorded for a given year (and thus they have no recorded wage), the *earnings* variable is marked as missing.

We can pivot more than one variable. Instead of spreading only the values of *earnings* across the year columns, we can spread both *earnings* and *age*. We do so by specifying both variables in the `values_from` argument.

```
fake_data %>% arrange(year) %>% pivot_wider(names_from = "year", values_from = c("earnings", "age"))
```

Now suppose we want to work backward and transform this data set back into its original, “longer” shape (just now without the *age* variable). To do this, we can invoke the complementary `pivot_longer` function. The arguments we need to specify are:

1. the names of the columns we want to pivot to longer format (in our case, `cols = '1995':'2011'`);
2. the name of the new column that will be created from the information stored in the column names specified by `cols` (in our case, `names_to = "year"`);
3. the name of the column to create from the data stored in cell values, `values_to = "earnings"`.

```
long_data <- wide_data %>%
    pivot_longer(cols = '1995':'2011', names_to = "year", values_to = "earnings")
head(long_data)
```

Remember that, when going from long to wide format, we created several missing values every time a worker’s information for a given year was not available. Now that we transform our data back from wide to long format, we carry with us all those missing values we had created. We can ask R to exclude them automatically by adding the option `values_drop_na = TRUE`.

```
long_data_short <- wide_data %>%
    pivot_longer(cols = '1995':'2011', names_to = "year", values_to = "earnings",
                 values_drop_na = TRUE)
head(long_data_short)
```
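One caveat: because the new *year* column is built from column names, `pivot_longer` returns it as a character column by default. tidyr's `names_transform` argument can convert it back to a number; below is a sketch on a toy wide data frame (the values are made up for illustration):

```r
library(tidyr)
library(dplyr)

# toy wide data frame with made-up values
wide <- tibble(id = 1:2, `1995` = c(10, 20), `1996` = c(30, NA))

long <- wide %>%
    pivot_longer(cols = '1995':'1996', names_to = "year", values_to = "earnings",
                 names_transform = list(year = as.integer),  # year as integer, not character
                 values_drop_na = TRUE)
```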

If this doesn’t seem intuitive or quickly comprehensible, don’t worry. Even many experienced coders struggle with the pivoting/reshaping functionality. With practice, it will become much more digestible!

## 6.4 Common mistakes

It is easy to forget that `summarise()` creates a new data frame containing only the grouping variables and the summary variables we have defined. Suppose we want to compute average earnings by region and treated status. We may try to do something like the following:

```
step1 <- fake_data %>%
    group_by(region) %>%
    summarise(meanearnings = mean(earnings))

step2 <- step1 %>%
    group_by(treated) %>%
    summarise(meanearnings = mean(earnings))
```

This results in an error: the first `summarise` creates a new data frame that no longer contains the variable *treated*. We can see this in the error message: *column ‘treated’ is not found*.

The right way of doing what we wanted is as follows:

```
fake_data %>%
    group_by(region, treated) %>%
    summarise(meanearnings = mean(earnings))
```

When we move from wide to long format, or vice versa, the variables that we *do not* pivot should remain constant over the variable that we pivot (namely, the variable we use in the `names_from` argument).

Consider the example below. It is similar to what we did above but it has a crucial difference; can you spot it?

```
fake_data %>% arrange(year) %>% pivot_wider(names_from = "year", values_from = "earnings")
```

Earlier we dropped the variable *age*, while now we are keeping it. The variable *age* is now treated as if it were constant over *year*, the variable we are using to pivot the data. This is not necessarily a mistake, and in fact R allows us to do the reshape. However, it changes the way in which we interpret *age*: it is now the age of the worker in their first year of appearance in the data set.

## 6.5 Wrap Up

Being able to generate new variables and modify a data set to suit your specific research is pivotal. Now you should hopefully have more confidence in your ability to perform these tasks. Next, we will explore the challenges posed by working with multiple data sets at once.

In this module we have seen many new functions. Below you can find a short summary on each one of them:

| Function | Description |
|---|---|
| `arrange` | Orders observations based on the ascending or descending order of one or more variables. |
| `group_by` | Groups observations based on the values of one or more variables. It may be combined with `summarise` to compute summary statistics by group. |
| `group_keys` | Lists the specific groups of a grouped data frame. |
| `ungroup` | Removes one or more grouping variables. |
| `summarise` | Collapses a grouped data frame into one row per group, with variables built from summary functions such as `mean`, `sd`, and `n()`. |
| `pivot_wider` | Pivots data from long to wide format. |
| `pivot_longer` | Pivots data from wide to long format. |