# loading in our packages
library(tidyverse)
library(haven)
library(digest)
source("beginner_distributions_tests.r")
1.6 - Beginner - Distributions
Outline
Prerequisites
- Introduction to Jupyter
- Introduction to R
- Introduction to Visualization
- Central Tendency
Outcomes
After completing this notebook, you will be able to:
- Understand and work with Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs)
- Use tables to find joint, marginal, and conditional probabilities
- Interpret uniform, normal, and \(t\) distributions
Introduction
This notebook will explore the concept of distributions, both in terms of their functional forms for probability and how they represent different sets of data.
Let’s first load the 2016 Census from Statistics Canada, which we will consult throughout this lesson.
# reading in the data
census_data <- read_dta("../datasets_beginner/01_census2016.dta")

# cleaning up factors
census_data <- as_factor(census_data)

# cleaning up missing data
census_data <- filter(census_data, !is.na(census_data$wages))
census_data <- filter(census_data, !is.na(census_data$mrkinc))

# inspecting the data
glimpse(census_data)
Now that we have our data set ready on stand-by for analysis, let’s start looking at distributions as a concept more generally.
Part 1: Introduction to Concepts in Probability
What is a Probability?
The probability of an event is a number that indicates the likelihood of that event happening.
When the possible values of a certain event are discrete (e.g., 1, 2, 3 or adult, child), we refer to this as the frequency. When the possible values are continuous (e.g., any number between 0.5 and 3.75), we refer to this as the density.
There is a difference between population probabilities and empirical or sample probabilities. Generally, when we talk about distributions we will be referring to population objects, but there are also sample versions, which are often easier to think about.
For instance, let’s say we have a dataset with 5,000 observations and a variable called birthmonth which records the month of birth of every participant captured in the dataset. If 500 people in the data were born in October, then birthmonth == "October" would have an empirical probability of occurring in an observation 10% of the time. We can’t be sure what the population probability would be unless we knew more about the population.
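To make this concrete, here is a minimal sketch that simulates a hypothetical birthmonth variable (note that this variable is not in our census data) and computes an empirical probability:

# simulating a hypothetical birthmonth variable with 5,000 observations
set.seed(1) # for reproducibility
birthmonth <- sample(month.name, 5000, replace = TRUE)

# the empirical probability of an observation being born in October
mean(birthmonth == "October")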
What is a Random Variable?
A random variable is a variable whose possible values are numerical outcomes of a random phenomenon, such as rolling a die. A random variable can be either discrete or continuous.
A discrete random variable is one which may take on only a finite number of distinct values (e.g., the number of children in a family).
- In this notebook we’ll see that agegrp is an example of a discrete variable.
A continuous random variable is one which takes an infinite number of possible values and can be measured rather than merely categorized (e.g., height, weight, or how much people earn).
- In the data, we can see that wages and mrkinc are great examples of continuous random variables.
What is a Probability Distribution?
A probability distribution refers to the pattern or arrangement of probabilities in a population. These are usually described as functions that indicate the probability of each possible value occurring. As we explained above, there is a difference between population and sample distributions:
- A population distribution (which is the typical way we describe these) describes population probabilities
- An empirical or sample distribution reports empirical probabilities from within a particular sample
Note: we typically use empirical distributions as a way to learn about the population distributions, which is what we’re primarily interested in.
Distribution functions come in several standard forms; let’s learn about them.
Part 2: Distribution Functions of Single Random Variables
Probability Density Functions (PDFs)
Probability Density Functions are usually abbreviated as PDFs; in the discrete case they are also called probability mass functions. We usually use lowercase letters like \(f\) or \(p\) to describe these functions.
Discrete PDFs
“The probability distribution of a discrete random variable is the list of all possible values of the variable and their probabilities which sum to 1.” - Econometrics with R
A PDF, also referred to as the density or frequency, gives the probability of occurrence of each of the different values of a variable.
Suppose a random variable \(X\) may take \(k\) different values, with the probability that \(X = x_{i}\) defined to be \(\mathrm{P}(X = x_{i}) = p_{i}\). The probabilities \(p_{i}\) must satisfy the following:
- For each \(i\), \(0 \leq p_{i} \leq 1\)
- \(p_{1} + p_{2} + ... + p_{k} = 1\)
We can view the empirical PDF of a discrete variable by creating either a frequency table or a graph.
Let’s start by creating a frequency table of age groups using the variable agegrp in our census_data.
census_data_pdf <- filter(census_data, agegrp != "not available") # filter out NAs

table_1 <- census_data_pdf %>%
    group_by(agegrp) %>% # for every age group
    summarize(count = n()) %>%
    mutate(frequency = count/sum(count)) # calculate the frequency with which they occur
table_1
Now let’s try creating a graph to show the data. PDFs are best visualized with histograms. To show a histogram with the probabilities on the y-axis, we use the function geom_bar().
Refer to Introduction to Visualization for a refresher.
plot_pdf <- ggplot(data = table_1, aes(x = agegrp, y = frequency)) +
    geom_bar(stat = 'identity') + # specify identity to plot the values in frequency
    theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
plot_pdf
Continuous PDFs
“Since a continuous random variable takes on a continuum of possible values, we cannot use the concept of a probability distribution as used for discrete random variables.” - Econometrics with R
Unlike a discrete variable, a continuous random variable is not defined by specific values. Instead, it is defined over intervals of values, and is represented by the area under a curve (in Calculus, that’s an integral).
The curve, which represents the probability function, is also called a density curve, and it must satisfy the following:
- The curve can take no negative values: \(p(x) \geq 0\) for all \(x\) (the probability of observing a value can’t be negative)
- The total area under the curve must be equal to 1
Let’s imagine a random variable that can take any value over an interval of real numbers. The probability of observing values between \(a\) and \(b\) is the area below the density curve for the region defined by \(a\) and \(b\):
\[ \mathrm{P}(a \le X \le b) = \left(\int_{a}^{b} f(x) \; dx\right) \]
Since the number of values which may be assumed by the random variable is infinite, the probability of observing any single value is equal to 0.
Example: If we take height as a continuous random variable, the probability of observing an exact height (e.g., exactly 173.4827 or exactly 187.19283 centimeters) is zero because the number of values which may be assumed by the random variable is infinite.
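As a quick sketch of this idea (modeling height, purely for illustration, as normal with mean 170 and standard deviation 10), the area under the density curve over a zero-width interval is zero:

# the probability of one exact value is the area over a single point, which is 0
integrate(dnorm, lower = 173.4827, upper = 173.4827, mean = 170, sd = 10)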
We will use graphs rather than tables to visualize continuous PDFs, as the entire continuum of possible values needs to be represented. Since the probability of observing values between \(a\) and \(b\) is the area underneath the curve, a continuous PDF should be visualized as a line graph rather than a bar graph or scatterplot.
Suppose we would like to visualize a continuous empirical PDF for all wages between 25000 and 75000:
density <- density(census_data$wages)
plot(density)

# telling R how to read our upper and lower bounds
l <- min(which(density$x >= 25000))
h <- max(which(density$x < 75000))

# visualizing our specified range in red
polygon(c(density$x[c(l, l:h, h)]),
        c(0, density$y[l:h], 0),
        col = "red")
Cumulative Distribution Functions (CDFs)
When we have a variable which is rankable, we can define a related object: the Cumulative Distribution Function (CDF).
The CDF for both discrete and continuous random variables is the probability that the random variable is less than or equal to a particular value.
Hence, the CDF must necessarily be an increasing function. Think of the example of rolling a die:
- \(F(1)\) would indicate the probability of rolling 1
- \(F(2)\) would indicate the probability of rolling 2 or lower
- Evidently, \(F(2)\) would be greater than \(F(1)\)
A CDF can only take values between 0 and 1.
- 0 (or 0%) is the probability that the random variable is less than or equal to the smallest value of the variable
- 1 (or 100%) is the probability that the random variable is less than or equal to the biggest value of the variable
Therefore, if we have a variable \(X\) that can take the value of \(x\), the CDF is the probability that \(X\) will take a value less than or equal to \(x\).
Since we use the lowercase \(f\) to represent a PDF, we use the uppercase \(F\) to represent a CDF. Mathematically, if \(f_{X}(x)\) denotes the probability density function of \(X\), the CDF is
\[ F_{X}(x) = \mathrm{P}(X \leq x) = \int_{-\infty}^{x} f_{X}(t) \; dt \]
and the probability that \(X\) falls between \(a\) and \(b\), where \(a \leq b\), is
\[ \mathrm{P}(a \leq X \leq b) = \int_{a}^{b} f_{X}(x) \; dx \]
Since \(X\) must take some value, the total area under the density curve is 1:
\[ \mathrm{P}(-\infty \le X \le \infty) = \int_{-\infty}^{\infty} f_{X}(x) \; dx = 1 \]
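As a quick numerical check of these properties (using the standard normal density as the example), we can compute the integrals with R’s integrate() function:

# the total area under a density curve is 1
integrate(dnorm, lower = -Inf, upper = Inf)

# the probability of falling in an interval is the area over that interval
integrate(dnorm, lower = -1.96, upper = 1.96) # about 0.95
pnorm(1.96) - pnorm(-1.96) # the same probability, computed from the CDF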
Below we’ve plotted the empirical CDF of the continuous variable wages. The graph indicates that most people earn between 0 and 200000, as the probability of wages being less than or equal to 200000 is over 90%.
# calculate CDF
p <- ecdf(census_data$wages)

# plot CDF
plot(p)
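Because ecdf() returns a function, we can evaluate the empirical CDF directly at any value. For example, to check the claim above about wages of 200000:

# the share of observations with wages less than or equal to 200000
p(200000)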
Part 3: Distribution Functions of Multiple Random Variables
So far, we’ve looked at distributions for single random variables. However, we can also use joint distributions to analyze the probability of multiple random variables taking on certain values.
Joint Probability Distribution
The joint distribution is the probability distribution over all possible pairs of values that \(X\) and \(Y\) can take on.
Let’s suppose both \(X\) and \(Y\) are discrete random variables which can take on values from 1 to 3. The joint probability table (\(Y\) on the vertical axis, \(X\) on the horizontal) is shown below:
|  | \(X = 1\) | \(X = 2\) | \(X = 3\) |
|---|---|---|---|
| \(Y = 1\) | 1/6 | 1/6 | 1/12 |
| \(Y = 2\) | 1/12 | 0 | 1/12 |
| \(Y = 3\) | 1/4 | 1/6 | 0 |
The table shows the probability that \(X\) and \(Y\) take on certain values. For example, the first column of the third row states that \(\mathrm{P}(X = 1, Y = 3) = 1/4\).
Notice that the probabilities sum to \(1\).
Every joint distribution can be represented by a PDF and CDF, just like single random variables. The formal notation of a PDF for two jointly distributed random variables is
\[f(x, y) = \mathrm{P} (X = x, Y = y)\]
where \(f(x, y)\) is the joint probability density that the random variable \(X\) takes on a value of \(x\), and the random variable \(Y\) takes on a value of \(y\).
The CDF for jointly distributed random variables follows the same logic as for single variables, though this time it represents the probability that both variables simultaneously take on values less than or equal to those specified. This is less intuitive for two discrete random variables, but it is especially useful when both variables are continuous. The formal notation of a CDF for two jointly distributed random variables is
\[ F(x, y) = \mathrm{P}({X \leq x}, {Y \leq y}) \]
where \(F(x, y)\) is the joint cumulative probability that the random variable \(X\) takes on a value less than or equal to \(x\) and the random variable \(Y\) takes on a value less than or equal to \(y\) simultaneously.
Marginal Probability Distribution
The marginal distribution is the probability density function for each individual random variable. If we add up all of the joint probabilities in the same row or the same column, we get the probability of one random variable taking on each of its possible values. We can represent the marginal probability density function of \(X\) as follows:
\[ f_{X}(x) = \sum_{y} \mathrm{P}(X = x, Y = y) \]
where, for a given \(x\), we sum across all possible values of \(Y\).
If we wanted the marginal empirical probability distribution function of \(X\), we would need to find the marginal probability for all possible values of \(X\).
The marginal probability that \(X = 1\) from our example above is simply the probability that \(X\) takes on \(1\), summed over every possible value of \(Y\): \[ \mathrm{P}(X = 1, Y = 1) + \mathrm{P}(X = 1, Y = 2) + \mathrm{P}(X = 1, Y = 3) = 1/6 + 1/12 + 1/4 = 1/2 \]
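If you would like to verify these calculations in R, here is a minimal sketch that stores the joint table from our example as a matrix and computes the marginal distributions by summing columns and rows:

# the joint probability table from the example above
joint <- matrix(c(1/6, 1/6, 1/12,
                  1/12, 0, 1/12,
                  1/4, 1/6, 0),
                nrow = 3, byrow = TRUE,
                dimnames = list(c("Y=1", "Y=2", "Y=3"), c("X=1", "X=2", "X=3")))

colSums(joint) # marginal distribution of X; the first entry is P(X = 1) = 1/2
rowSums(joint) # marginal distribution of Y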
One important point to consider is that of statistical independence of random variables.
- Two random variables are independent if and only if their joint probability of occurrence equals the product of their marginal probabilities for all possible combinations of values of the random variables.
- In mathematical notation, this means that two random variables are statistically independent if and only if:
\[ f(x, y) = f_{X}(x) \, f_{Y}(y) \]
Think Deeper: Can you tell whether the two random variables in our example are statistically independent?
Conditional Probability Distribution
The conditional distribution function gives the probability distribution of one random variable conditional on a specified value of another random variable, provided that the two random variables are jointly distributed.
Below is the formula for the conditional probability density function of random variables \(X\) and \(Y\):
\[ f(x | y) = \frac {\mathrm{P} (X = x \cap Y = y)} {\mathrm{P}(Y = y)} \]
where
- \(f(x | y)\) represents the conditional probability that the random variable \(X\) will take on a value of \(x\) when the random variable of \(Y\) takes on a value of \(y\)
- \(\cap\) represents the case that both \(X\) = \(x\) and \(Y\) = \(y\) simultaneously (a joint probability)
Note: the marginal probability that \(Y = y\) must not be 0 as that would make the conditional probability undefined.
Let’s say we want to find the conditional probability of \(X = 1\) given \(Y = 2\), using the joint probability table in our example above. To find it, we first find \(\mathrm{P}(Y = 2)\) and then divide \(\mathrm{P}(X = 1, Y = 2)\) by that number. We get: \((1/12) \div (1/12 + 0 + 1/12) = 1/2\).
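Using the joint matrix from the sketch above, the same conditional calculation takes one line:

# the conditional distribution of X given Y = 2
joint["Y=2", ] / sum(joint["Y=2", ]) # the first entry is P(X = 1 | Y = 2) = 1/2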
Until now, we have referred to the joint, marginal and conditional distribution of two discrete random variables; however, the logic extends to continuous variables.
We focused on discrete random variables since they are much easier to represent in table format. While the same logic for discrete variables applies to continuous random variables, we often refer to mathematical formulas when finding the marginal and conditional probability functions for continuous random variables, since their PDFs and CDFs can be represented by mathematical functions.
Note: we can also have more than two jointly distributed random variables. While it is possible to represent the probability of 3 or more variables taking on certain values at once, it is hard to represent that graphically or in table format. That is why we have stuck to investigating two jointly distributed random variables in this notebook.
Test your knowledge
Let the random variable \(X\) denote the time (in hours) Omar can wait for his flight. Omar could have to wait up to 2 hours for his flight. Use this information to answer questions 1, 2, and 3 below.
Is \(X\) a discrete or continuous random variable?
# your answer of "discrete" or "continuous" in place of ...
answer_1 <- "..."

test_1()
Say a potential probability density function representing this random variable is the following:
\[ f(x) = \begin{cases} x & \text{if } 0 \leq x \leq 1,\\ 2 - x & \text{if } 1 \leq x \leq 2,\\ 0 & \text{otherwise} \end{cases} \]
Is this a valid PDF?
# your answer of "yes" or "no" in place of ...
answer_2 <- "..."

test_2()
What is the probability of a person waiting up to 1.5 hours for their flight? Answer to 3 decimal places.
Hint: this is not the same as the probability of waiting precisely 1.5 hours.
# your code here

# your answer for the cumulative probability (in decimal format, i.e. 95% = 0.95) here
answer_3 <- ...

test_3()
Now let’s change gears and look at the joint distribution of the discrete random variables immstat (rows) and kol (columns).
|  | \(1\) | \(2\) | \(3\) |
|---|---|---|---|
| \(1\) | 1/4 | 1/6 | 1/6 |
| \(2\) | 1/5 | 1/5 | 1/60 |
| \(3\) | 0 | 0 | 0 |
Use the following legend to answer questions 4 and 5 below:
- immstat takes values 1 == non-immigrant; 2 == immigrant; 3 == NA
- kol takes values 1 == English only; 2 == French only; 3 == both French and English
What is the probability that someone is both an immigrant and knows both English and French? Answer in fractional form.
# your answer for the probability (in fractional format, i.e. 10% = 1/10) here
answer_4 <- ...

test_4()
What is the probability that someone is an immigrant, given that they know only English? Answer in fractional form.
# your answer for the probability (in fractional format, i.e. 10% = 1/10) here
answer_5 <- ...

test_5()
Let \(Y\) be a continuous random variable uniformly distributed on the range of values [20, 80]. Use this information to answer questions 6, 7, and 8.
What is the probability of \(Y\) taking on the value of 30? You may use a graph to help you.
# your code here

# your answer for the probability (in fractional format, i.e. 25% = 1/4) here
answer_6 <- ...

test_6()
What is the probability of \(Y\) taking on a value of 60 or more?
# your answer for the probability (in fractional format, i.e. 25% = 1/4) here
answer_7 <- ...

test_7()
If the range of \(Y\) were expanded to [20, 100], would the probability that \(Y\) takes on a value of 60 or more increase or decrease?
# your answer of "increase" or "decrease" in place of "..."
answer_8 <- "..."

test_8()
Now, let \(Z\) be a normally distributed random variable representing the length of a piece of classical music (in minutes), with a mean of 5 and standard deviation of 1.5. Use this information to answer questions 9, 10 and 11.
What is the probability that a given piece will last between 3 and 7 minutes? Answer to 4 decimal places. You may use code to help you.
# your code here

# your answer for the probability (in decimal format, i.e. 95.23% = 0.9523) here
answer_9 <- ...

test_9()
If \(Z\) were to remain normally distributed and have the same standard deviation, but the mean piece length was changed to 3 minutes, how would this probability change?
# your answer of "increase" or "decrease" in place of "..."
answer_10 <- "..."

test_10()
Returning to our original \(Z\) variable (with mean 5), if the standard deviation were to decrease to 1, how would the original probability change?
# your answer of "increase" or "decrease" in place of "..."
answer_11 <- "..."

test_11()
Part 4: Parametric Distributions
All of the examples we used so far were for empirical distributions since we didn’t know what the population distributions were. However, many statistics do have known distributions which are very important to understand.
Let’s look at the three most famous examples of distributions:
- uniform distribution
- normal (or Gaussian) distribution
- Student \(t\)-distribution
These are called parametric distributions because they can be described by a set of numbers called parameters. For instance, the normal distribution’s two parameters are the mean and standard deviation.
All the parametric distributions explained in this module are analyzed using four R commands, which start with the following prefixes:
- d for “density”: produces the probability density function (PDF)
- p for “probability”: produces the cumulative distribution function (CDF)
- q for “quantile”: produces the inverse cumulative distribution function, also called the quantile function
- r for “random”: generates random numbers from a particular parametric distribution
Uniform Distribution
A continuous variable has a uniform distribution if all values have the same likelihood of occurring.
An example of a random event with a (discrete) uniform distribution is rolling a die: each of the six numbers is equally likely to come up.
The variable’s density curve is therefore a rectangle, with constant height across the interval and 0 height elsewhere.
Since the area under the curve must be equal to 1, the length of the interval determines the height of the curve.
Let’s see what this kind of distribution might look like.
- First, we will generate random values from this distribution using the function runif().
- This command is written as runif(n, min = , max = ), where n is the number of observations, and min and max give the interval from which the random values are drawn.
example_unif <- runif(10000, min = 10, max = 100)
hist(example_unif, freq = FALSE, xlab = 'x', xlim = c(0,100), main = "Empirical PDF for uniform random values on [10,100]")
While each number within the specified range is equally likely to be drawn, by random chance some ranges of numbers are drawn more frequently than others, hence the bars are not all the exact same height. The shape of the distribution will change each time you re-run the previous code cell.
If we know the underlying distribution, we can infer many characteristics of the data. For instance, suppose we have a uniform random variable \(X\) defined on the interval \((10,50)\).
Since the interval has a width of 40, the curve must have a height of \(\frac{1}{40} = 0.025\) over the interval and 0 elsewhere. The probability that \(X \leq 25\) is the area between 10 and 25, or \((25-10)\cdot 0.025 = 0.375\).
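We can verify this hand calculation with punif():

# P(X <= 25) for X uniform on (10, 50)
punif(25, min = 10, max = 50) # returns 0.375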
The dunif() function calculates the uniform probability density function for a variable and can also calculate the density at a specific value.
range <- seq(0, 100, by = 1) # creating a grid of values at which to evaluate the density
ex.dunif <- dunif(range, min = 10, max = 50) # calculating the PDF over the grid
plot(ex.dunif, type = "o") # plotting the PDF
CDF
The punif() function calculates the uniform cumulative distribution function for a set of values.
x_cdf <- punif(range, # vector of quantiles
               min = 10, # lower limit of the distribution (a)
               max = 50, # upper limit of the distribution (b)
               lower.tail = TRUE, # if TRUE, probabilities are P(X <= x); if FALSE, P(X > x)
               log.p = FALSE) # if TRUE, probabilities are given as log

plot(x_cdf, type = "l")
The qunif() function goes the other way: given a cumulative probability, it returns the value located at that quantile of the distribution, letting us recover quantiles from probabilities.
quantiles <- seq(0, 1, by = 0.01)
y_qunif <- qunif(quantiles, min = 10, max = 50)
plot(y_qunif, type = "l")
Normal (Gaussian) Distribution
We first saw the normal distribution in the Central Tendency notebook. The normal distribution is fundamental to many statistical procedures, as many random variables in the natural and social sciences are normally distributed (e.g., height and SAT scores both follow a normal distribution). We refer to this type of distribution as “normal” because it is symmetrical and bell-shaped.
A normal distribution is parameterized by its mean \(\mu\) and its standard deviation \(\sigma\), and it is expressed as \(N(\mu,\sigma)\). We cannot calculate the normal distribution without knowing the mean and the standard deviation.
The PDF has a somewhat complex equation, which can be written as:
\[ f(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x-\mu)^{2}/(2\sigma^{2})} \]
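As a quick sanity check, here is a sketch that implements this formula by hand and compares it with R’s built-in dnorm() (the test point 1.3 is arbitrary):

# the normal PDF written out by hand
f <- function(x, mu, sigma) {
    exp(-(x - mu)^2 / (2 * sigma^2)) / (sigma * sqrt(2 * pi))
}

f(1.3, 0, 1) # by hand
dnorm(1.3, mean = 0, sd = 1) # built-in; the two should match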
A standard normal distribution is a special normal distribution: it has a mean equal to zero and a standard deviation equal to 1 (\(\mu=0\) and \(\sigma=1\)), hence we can denote it \(N(0,1)\). A couple of notation points to keep in mind include:
- Standard normal variables are often denoted by \(Z\)
- Standard normal PDF is denoted by \(\phi\)
- Standard normal CDF is denoted by \(\Phi\)
To generate simulated normal random variables, we can use the rnorm() function, which is similar to the runif() function.
x <- rnorm(10000, # number of observations
           mean = 0, # mean
           sd = 1) # standard deviation
hist(x, probability=TRUE) # the command hist() creates a histogram using variable x
xx <- seq(min(x), max(x), length=100)
lines(xx, dnorm(xx, mean=0, sd=1))
As with the uniform distribution, we can use dnorm() to plot the standard normal PDF.
# create a sequence of 100 equally spaced numbers between -4 and 4
x <- seq(-4, 4, length=100)

# create a vector of values that shows the height of the probability distribution
# for each value in x
y <- dnorm(x)

# plot x and y as a scatterplot with connected lines (type = "l") and add
# an x-axis with custom labels
plot(x, y, type = "l", lwd = 2, axes = FALSE, xlab = "", ylab = "")
axis(1, at = -3:3, labels = c("-3s", "-2s", "-1s", "mean", "1s", "2s", "3s"))
We used the randomly generated values to observe the bell-shaped distribution. This is a standard normal PDF because the mean is zero and the standard deviation is one. We can also change the mean and sd arguments in the rnorm() command to make the distribution non-standard.
dnorm() gives the height of the probability distribution at each point for a given mean and standard deviation. Since the height of the PDF curve is the density, dnorm() can also be used to trace out the entire density curve, as we saw with lines(xx, dnorm(xx, mean=0, sd=1)) in the previous section.
dnorm(100, mean=100, sd=15) # the height of the density at the mean of a N(100, 15) variable
CDF
The pnorm() function can (1) trace out the entire CDF curve of a normally distributed random variable and (2) give the probability that the variable takes a value less than or equal to a given number.
curve(pnorm(x),
xlim = c(-3.5, 3.5),
ylab = "Probability",
main = "Standard Normal Cumulative Distribution Function")
pnorm(27.4, mean=50, sd=20) # gives you the CDF at that specific location
The qnorm() function creates the percent point function (ppf), which is the inverse of the cumulative distribution function: it takes a cumulative probability and returns the value whose cumulative probability matches it.
- The CDF of a specific value is the probability that a normally distributed random variable takes a value less than or equal to that value
- To create the ppf, we start with that probability and use the qnorm() function to recover the corresponding value of the variable
curve(qnorm(x),
xlim = c(0, 1),
xlab = "Probability",
ylab = "x",
main = "Quantile (inverse CDF) Function")
qnorm(0.84, mean=100, sd=25)
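Since qnorm() inverts pnorm(), running one after the other returns the probability we started with:

# qnorm() and pnorm() are inverses of one another
pnorm(qnorm(0.84, mean = 100, sd = 25), mean = 100, sd = 25) # returns 0.84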
Think Deeper: The output of the function above shows that the 84th quantile is approximately 1 standard deviation to the right of the mean. Do you recognize this property of normally distributed random variables?
Student’s \(t\)-Distribution
The Student’s \(t\)-distribution is a continuous distribution that arises when we estimate the mean of a normally distributed population from a small sample with an unknown standard deviation. This is an important concept that we will explore in future notebooks.
The \(t\)-distribution is based on the number of observations and the degrees of freedom.
A degree of freedom (\(\nu\)) is the maximum number of logically independent values. You can think of it as the number of values that need to be known in order for the remaining values to be determined. For example, let’s say you have 3 data points and you know that their average value is 5. If you randomly select two of the values (let’s say 4 and 5), then even without sampling the last data point you know that its value must be 6. Hence, there is “no freedom” in the last data point.
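Here is the same example as a tiny sketch in R:

# three data points with a known mean of 5; two sampled values pin down the third
mean_all <- 5
known <- c(4, 5)
3 * mean_all - sum(known) # the last value must be 6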
In the case of the \(t\)-distribution, the degree(s) of freedom are calculated as \(\nu = n-1\), with \(n\) being the sample size.
When \(\nu\) is large, the \(t\)-distribution begins to look like a standard normal distribution. This approximation between standard normal and \(t\)-distribution can start to be noticed around \(\nu \geq 30\).
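A quick sketch makes this approximation visible: with 30 degrees of freedom, the \(t\) density is already very close to the standard normal density everywhere on a grid of values:

# comparing the t density (30 df) to the standard normal density
grid <- seq(-4, 4, by = 0.01)
max(abs(dt(grid, df = 30) - dnorm(grid))) # the maximum gap is small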
As with the uniform and normal distributions, to generate random values that together follow a \(t\)-distribution we add the prefix r to the name of the distribution: rt().
n <- 100
df <- n - 1

samples <- rt(n, df)
hist(samples, breaks = 20, freq = FALSE)
xx <- seq(min(samples), max(samples), length=100)
lines(xx, dt(xx, df))
Although the \(t\)-distribution is bell-shaped and symmetrical like the normal distribution, it has fatter tails. Hence, the data is more spread out than under a normal distribution.
Note: this is explained by the central limit theorem (CLT) and the law of large numbers (LLN), which we will explore in future notebooks.
The function dt() calculates the PDF (density) of the \(t\)-distribution at particular values, given the degrees of freedom.
In the example below we use the variable ex.tvalues, a sequence of numbers ranging from -4 to 4 in increments of 0.01. We evaluate the density with 799 degrees of freedom, a large value for which the \(t\)-distribution is very close to a standard normal.
ex.tvalues <- seq(-4, 4, by = 0.01) # generating a sequence of numbers
ex_dt <- dt(ex.tvalues, df = 799) # calculating the PDF
plot(ex_dt, type = "l")
CDF
The pt() function calculates the CDF of a \(t\)-distributed random variable: the probability that it takes a value less than or equal to a given number.
ex_pt <- pt(ex.tvalues, df = 799) # calculating the CDF
plot(ex_pt, type = "l")
The qt() function takes a probability value and gives the number whose cumulative probability matches it. Like qnorm(), it is the inverse CDF and can be used to create a percent point function (ppf).
ex.qtvalues <- seq(0, 1, by = 0.01) # generating a sequence of probabilities
ex_qt <- qt(ex.qtvalues, df = 99) # calculating the ppf
plot(ex_qt, type = "l") # plotting the ppf
Beyond these three common distributions, there are many other types of distributions, such as the chi-square and \(F\) distributions. In some cases we may also have variables that do not fit any common distribution; we describe those distributions as non-parametric.
Test your knowledge
Which of the following random variables is most likely to be uniformly distributed?
A. The height of a UBC student
B. The wages of a UBC student
C. The birthday of a UBC student
# enter your answer here as "A", "B", or "C"
answer_12 <- "..."

test_12()
Which of the following random variables is most likely to be normally distributed?
A. The height of a UBC student
B. The wages of a UBC student
C. The birthday of a UBC student
# enter your answer here as "A", "B", or "C"
answer_13 <- "..."

test_13()
Given our uniform distribution example_unif, find \(F(72)\).
Hint: you don’t need to calculate the exact probability given the distribution. You only need to know that this random variable is uniformly distributed for values between 10 and 100.
# your code here

# enter your answer as a fraction below
answer_14 <- ...

test_14()
Assume we have a standard normal distribution \(N(0,1)\). Find \(F(0)\).
# enter your answer as a fraction below
answer_15 <- ...

test_15()
Let’s assume we have a Student \(t\)-distribution that approximates a normal distribution really well. What must be true?
A. The degrees of freedom parameter must be very large
B. The degrees of freedom parameter must be very small
C. The degrees of freedom parameter must be equal to the mean of the normal distribution
# enter your answer here as "A", "B", or "C"
answer_16 <- "..."

test_16()