STAT 205 Learning Outcomes
Legend
- 🔴 Very Important
- 🟠 Important
- 🟡 Kind of Important
- 🟢 Not Very Important
Anything with a strikeout will not be tested on the final exam.
1: Introduction to Statistics
Descriptive statistics
- 🟠 LO: accurately calculate and interpret key descriptive statistics, including:
  - measures of central tendency (mean, median, and mode),
  - measures of variability (range, variance, and standard deviation), and
  - measures of location (quartiles and percentiles)
- 🔴 LO: proficiently create and analyze graphical representations of data, such as histograms, box plots, and scatter plots, to summarize and describe data distributions effectively.
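A minimal sketch of these summaries in base R, using a small made-up vector:

```r
# A small made-up sample, for illustration only
x <- c(12, 15, 15, 18, 20, 22, 25, 31)

mean(x)          # measure of center: sample mean
median(x)        # measure of center: sample median
range(x)         # smallest and largest value
var(x)           # sample variance (divides by n - 1)
sd(x)            # sample standard deviation
quantile(x)      # quartiles: 0%, 25%, 50%, 75%, 100%
hist(x)          # histogram of the distribution
boxplot(x)       # box plot (five-number summary plus outliers)
```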
Principles of random sampling
- 🟠 LO: Understand the principles of probability sampling and how they form the basis for making statistical inferences from a sample to a population.
- 🔴 LO: Understand the difference between a sample (of size \(n\)) and a population (either "infinite" or of size \(N\))
- 🔴 LO: Understand the difference between sample statistics (e.g. \(\bar x\), \(s\), \(\hat p\)) and population parameters (e.g. \(\mu\), \(\sigma\), \(p\))
- 🟢 LO: Distinguish between different sampling designs (e.g. simple random, stratified, cluster)
- 🟡 LO: Identify a target population
Types of data
- 🔴 LO: Identify variables as numerical (continuous or discrete) or categorical (nominal or ordinal)
- 🟠 LO: Differentiate between variables that are associated (positively or negatively) and those that are independent.
- 🟠 LO: Detect outliers in various types of data sets using graphical methods (box plots, scatter plots)
- 🟢 LO: Understand the difference between observational studies and experiments
2: Summarizing Data
Storing Data
- 🟡 LO: Describe and identify the basic data types in R: vectors, factors (ordered or unordered), lists, and data frames (with observations typically in rows and variables in columns)
- 🟡 LO: Data indexing: index vectors using `[]` and columns in a data frame using `$`; see footnote 1
- 🟢 LO: Understand the character data type (`character`, or string) and the logical data type (`TRUE` or `FALSE`) in R
- 🟡 LO: Apply coercion in R to convert data to the appropriate type (e.g. `as.factor()` to convert a numeric vector to a factor)
- 🟢 LO: Display data (using `str()`, `head()`, or `View()`)
- 🟠 LO: Construct and interpret contingency tables (along with marginals) to organize and summarize two categorical variables.
- 🟡 LO: Define a robust statistic (e.g. median, IQR) as a measure that is not heavily affected by skewness and extreme outliers, and determine when robust statistics are more appropriate measures of center and spread than other similar statistics.
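The storage, indexing, and coercion ideas above can be sketched in a few lines of base R (all values are made up for illustration):

```r
# Vectors and indexing with []
v <- c(10, 20, 30)             # numeric vector
v[2]                           # second element: 20

# Coercion: character vector -> factor
f <- as.factor(c("low", "high", "low"))
levels(f)                      # factor levels (alphabetical by default)

# A data frame: observations in rows, variables in columns
df <- data.frame(grade = c("A", "B", "A"), score = c(90, 80, 95))
df$score                       # a column, accessed with $
str(df)                        # compact display of the structure

table(df$grade)                # one-way contingency table of counts
```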
Plotting Data
- 🟡 LO: create simple plots using functions like `plot()`, `hist()`, `boxplot()`, etc.
- 🟢 LO: create advanced plots like stacked/side-by-side bar plots and side-by-side boxplots
- 🟢 LO: customize plot appearance by modifying attributes such as colors, labels, titles, axis limits, line types, etc.
- 🟢 LO: explore themes and packages (like `ggplot2`) for more advanced and polished visualizations.
- 🟡 LO: recognize and describe the common shapes of data distributions, including normal, skewed (right or left), uniform, and bimodal distributions.
- 🟠 LO: visually identify, interpret, and estimate key statistical metrics such as the mean, mode, and interquartile range (IQR) from graphical representations including histograms and box plots.
3: Sampling Distributions
- 🔴 LO: Explain the concept of a sampling distribution and its importance in statistical inference.
- 🔴 LO: Define the Central Limit Theorem and its significance in statistical theory.
- 🔴 LO: Understand the conditions under which the CLT applies and its implications for sample means and proportions
- 🔴 LO: Define the standard error as the standard deviation of a sampling distribution, representing the variability of sample statistics around the population parameter.
- 🔴 LO: Explain the conceptual difference between the standard error and the standard deviation, emphasizing their respective roles in describing variability in populations and samples.
- 🟢 LO: Derive the sampling distributions for the sample mean, proportion, and variance
- 🟠 LO: Use the sampling distribution of a sample statistic to create point estimates and confidence intervals.
- 🟠 LO: Apply knowledge of sampling distributions to the practical applications of hypothesis testing and constructing confidence intervals.
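A minimal simulation sketch of the CLT and the standard error; the seed is arbitrary, and a skewed Exponential(rate = 1) population (mean 1, sd 1) is chosen purely for illustration:

```r
# Simulate the sampling distribution of the sample mean
set.seed(205)                        # arbitrary seed, for reproducibility
n <- 30                              # sample size
xbars <- replicate(5000, mean(rexp(n, rate = 1)))

mean(xbars)                          # close to the population mean, 1
sd(xbars)                            # close to the standard error, 1/sqrt(n)
1 / sqrt(n)
hist(xbars, main = "Sampling distribution of the sample mean")
# Despite the skewed population, the histogram is approximately normal (CLT)
```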
4: Getting Started with Quarto
- 🔴 LO: Understand the advantages of using Quarto for reproducible document generation
- 🔴 LO: Create `.qmd` documents using RStudio and demonstrate the ability to integrate:
  - executable code chunks and in-line code
  - embedded figures and images
  - basic Markdown syntax (e.g. tables, headers, bold, italics, lists)
- 🟡 LO: describe the key features of the YAML header
- 🟢 LO: identify and use common keyboard shortcuts
- 🟡 LO: navigate the RStudio interface proficiently and explain its major components, including the script editor, console, environment pane, and visualization tools.
- 🟡 LO: understand and customize code chunk options (e.g. the `echo` option controls whether the code within a code chunk is displayed in the output document)
- 🟢 LO: write LaTeX equations
- 🟠 LO: explain the importance of setting a seed in random number generation for reproducibility
- 🟡 LO: demonstrate the ability to use the `set.seed()` function
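A quick sketch of why setting a seed matters: the same seed reproduces the same "random" numbers.

```r
# Reproducibility: identical seeds give identical random draws
set.seed(123)
a <- rnorm(3)

set.seed(123)   # reset to the same seed
b <- rnorm(3)

identical(a, b)   # TRUE: the results are reproducible
```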
5: Likelihood and Parameter Estimation
- 🔴 LO: Define, calculate, and identify point estimators
- 🔴 LO: Define, construct, and interpret confidence intervals
- 🟠 LO: Define and describe the Method of Moments for parameter estimation.
- 🟠 LO: Use sample moments (such as sample means, variances, and higher moments) to estimate the parameters of a specified distribution.
- 🟢 LO: Derive moment equations (moments will be provided on an exam if needed)
- 🟠 LO: Define the likelihood (and log-likelihood) in the context of statistical inference.
- 🟠 LO: Interpret the likelihood function as a tool for statistical inference and explain how it differs from probability.
- 🔴 LO: Derive a maximum likelihood estimator (MLE)
- 🟡 LO: Define and explain common considerations in statistical estimation, including bias, consistency, efficiency, sufficiency, and asymptotic normality.
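A sketch of maximum likelihood under an assumed Exponential(rate = \(\lambda\)) model, with a made-up sample. The closed-form MLE for this model is \(1/\bar x\); here it is checked against numerical maximization of the log-likelihood with `optimize()`:

```r
# Made-up sample, assumed to come from an Exponential(lambda) model
x <- c(0.8, 1.2, 0.5, 2.0, 1.5)

# Log-likelihood as a function of lambda
loglik <- function(lambda) sum(dexp(x, rate = lambda, log = TRUE))

mle_closed <- 1 / mean(x)          # closed-form MLE: 1 / x-bar
mle_num <- optimize(loglik, interval = c(0.01, 10), maximum = TRUE)$maximum

mle_closed
mle_num                            # agrees with the closed form
```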
6/7: Confidence Intervals for Means, Proportions, and Variance
- 🟠 LO: Understand what a pivotal quantity is and explain its role in statistical inference
- 🔴 LO: Construct a confidence interval at a given confidence level, either in (a, b) form or in the form point estimate \(\pm\) margin of error
- 🔴 LO: Identify and compute a margin of error
- 🔴 LO: Calculate the standard error for sample statistics using appropriate formulas.
- 🔴 LO: Interpret a given confidence interval as the plausible range of values for a population parameter (e.g. \(\mu\), \(p\), or \(\sigma^2\)) in the context of probability and uncertainty: "We are XX% confident that the true population parameter is in this interval", where XX% is the desired confidence level
- 🟠 LO: Understand and describe why we use "confidence" instead of the term "probability"
- 🟠 LO: Determine appropriate sample sizes based on a desired confidence level and precision (margin of error).
- 🟠 LO: Check the assumptions required for using this method (e.g. success-failure check: \(np \geq 10\) and \(n(1-p) \geq 10\))
- 🟠 LO: Identify factors that influence the margin of error (effectively the width) of a confidence interval, including sample size, confidence level, and population variability.
- 🟠 LO: Calculate the minimum sample size required for a given margin of error at a given confidence level
- 🟢 LO: Derive confidence intervals from knowledge of sampling distributions and probability statements.
- 🟠 LO: Understand the relationship between confidence intervals and hypothesis testing.
- 🟠 LO: Use confidence intervals to make inferences about population parameters
- 🔴 LO: Interpret QQ plots to assess the normality assumption
- 🔴 LO: Extract critical values from Z-tables, t-tables, and chi-square tables.
- 🔴 LO: Approximate probabilities from Z-tables, t-tables, and chi-square tables.
- 🔴 LO: Demonstrate the ability to use functions like `qnorm()`, `qt()`, and `qchisq()` to find critical values
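For instance, the table lookups above can be done directly in R (95% two-sided values shown; the degrees of freedom are arbitrary for illustration):

```r
# Critical values for a 95% confidence level
qnorm(0.975)                       # z* ~ 1.96
qt(0.975, df = 10)                 # t* with 10 degrees of freedom ~ 2.228
qchisq(c(0.025, 0.975), df = 10)   # chi-square bounds for a variance CI

# Tail probabilities (the table lookup in reverse)
1 - pnorm(1.96)                    # upper-tail area ~ 0.025
```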
Nonparametric confidence intervals
- 🟠 LO: Explain what nonparametric tests are and identify situations where they are more appropriate than parametric tests.
- LO: Construct a nonparametric confidence interval for the median
- LO: Construct a nonparametric confidence interval for the variance using resampling (bootstrap) methods
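One way to sketch a resampling interval in base R is the percentile bootstrap, shown here for the median with made-up data (the seed and the number of resamples are arbitrary choices):

```r
# Percentile bootstrap CI for the median (illustrative sketch)
set.seed(42)                                 # for reproducibility
x <- c(3, 7, 8, 5, 12, 14, 21, 13, 18)       # made-up sample

# Resample with replacement many times, recording each median
boot_medians <- replicate(2000, median(sample(x, replace = TRUE)))

quantile(boot_medians, c(0.025, 0.975))      # approximate 95% CI for the median
```

The same recipe works for the variance: replace `median` with `var` inside `replicate()`.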
8: Sampling Distribution Theory
- 🟠 LO: Define the sampling distribution and its significance in inferential statistics.
- 🟢 LO: Develop an understanding of how the CDF, PDF, and MGF can be used to derive relationships between different types of random variables
- 🟢 LO: Gain hands-on experience in proving the distribution characteristics of pivotal quantities
9: Sampling from Finite Populations
- 🟡 LO: Define finite population sampling and its significance in survey research and applied statistics.
- 🟠 LO: Define a simple random sample (SRS) from a population
- 🔴 LO: Explain the importance of simple random sampling (SRS) for statistical inference
- 🟠 LO: Describe the difference between a simple random sample without replacement (SRSWOR) and a simple random sample with replacement (SRSWR)
- 🟡 LO: Explain the differences between sampling from finite populations and sampling from infinite populations.
- 🟡 LO: Understand how finite population characteristics influence sampling design, estimation, and inference.
- 🟠 LO: Define the finite population correction factor and its role in adjusting variance estimates for SRSWOR samples from finite populations
- 🟠 LO: Apply the finite population correction factor to correct standard errors and confidence intervals for finite populations.
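A numerical sketch of the correction, using the \(\sqrt{(N-n)/(N-1)}\) form of the factor and hypothetical values for \(N\), \(n\), and \(s\):

```r
# Standard error with the finite population correction (FPC) for SRSWOR
N <- 500                                  # hypothetical population size
n <- 50                                   # sample size
s <- 4.2                                  # hypothetical sample sd

se_uncorrected <- s / sqrt(n)             # infinite-population SE
fpc <- sqrt((N - n) / (N - 1))            # finite population correction factor
se_corrected <- se_uncorrected * fpc      # always <= the uncorrected SE

c(se_uncorrected, fpc, se_corrected)
```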
10: Properties of Parameter Estimators
- 🔴 LO: Define an unbiased estimator
- 🔴 LO: Determine whether a given estimator is unbiased
- 🟠 LO: Define and interpret the Mean Squared Error (MSE); see footnote 2
- 🟡 LO: Understand the decomposition of MSE and its significance in evaluating the performance of estimators.
- 🟠 LO: Calculate and compare the relative efficiency of two estimators
- 🟠 LO: Understand how to determine which estimator provides more precise estimates under given conditions.
- 🟡 LO: Understand the concept of consistency in estimators (know that MLEs are consistent under certain conditions)
- 🟡 LO: Define the Minimum Variance Unbiased Estimator (MVUE) within the context of statistical estimation.
- 🟡 LO: Apply the CRLB to determine whether an unbiased estimator is the MVUE
- 🟠 LO: Identify and derive MVUEs using the Cramér-Rao Lower Bound (CRLB) theorem and understand the conditions under which an unbiased estimator achieves minimum variance.
- 🟠 LO: Use the Cramér-Rao Lower Bound to find the lower bound on the variance of unbiased estimators
- 🟢 LO: Define Fisher's information
- 🟢 LO: Explain the significance of Fisher's information in statistical inference and parameter estimation.
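Bias and MSE can be explored by simulation. A sketch under an assumed normal model with arbitrary settings, comparing the unbiased \(s^2\) (divide by \(n-1\)) with the MLE of the variance (divide by \(n\)):

```r
# Simulated bias and MSE of two variance estimators under normality
set.seed(1)
n <- 10
sigma2 <- 4                                     # true variance
sims <- replicate(10000, {
  x <- rnorm(n, mean = 0, sd = sqrt(sigma2))
  c(unbiased = var(x),                          # divides by n - 1
    mle = var(x) * (n - 1) / n)                 # divides by n
})

rowMeans(sims)                  # ~4 for s^2 (unbiased); below 4 for the MLE
rowMeans((sims - sigma2)^2)     # MSE: here the biased MLE has the smaller MSE
```

This illustrates the bias-variance trade-off in the MSE decomposition: a small bias can be worth a larger reduction in variance.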
11/12/13: Hypothesis Testing for one-sample
- 🔴 LO: Explain the concepts of null (\(H_0\)) and alternative (\(H_A\)) hypotheses, test statistics, significance levels, and p-values.
- 🔴 LO: Formulate the appropriate null and alternative hypotheses (either in symbols or in words) given a word problem (determine the appropriate direction of the alternative: upper-tailed, lower-tailed, or two-sided)
- 🟠 LO: Define Type I and Type II errors. Note that the conclusion of a hypothesis test might be erroneous regardless of the decision we make.
  - A Type I error is rejecting the null hypothesis when the null hypothesis is actually true.
  - A Type II error is failing to reject the null hypothesis when the alternative hypothesis is actually true.
- 🔴 LO: Define the significance level (alpha) and explain its role in hypothesis testing
- 🔴 LO: Identify the appropriate test statistic for a given problem
- 🟠 LO: Define the null distribution and explain its role in hypothesis testing.
- 🔴 LO: Calculate and identify an observed test statistic given some data
- 🟠 LO: Understand how sample size impacts the SE of point estimators
- 🔴 LO: Define a sample statistic as a point estimate for a population parameter (e.g. the sample mean \(\bar x\) is used to estimate the population mean \(\mu\)) and note that "point estimate" and "sample statistic" are synonymous.
- 🔴 LO: Explain the theoretical foundations of z-tests and t-tests, including their assumptions, conditions, and applicability
- 🔴 LO: Explain why the t-distribution helps make up for the additional variability introduced by using \(s\) (the sample standard deviation) in place of \(\sigma\) (the population standard deviation) when calculating the standard error.
- 🔴 LO: Identify and check the assumptions and conditions necessary for valid z-tests and t-tests
  - Note: the independence of observations in a sample is provided by a simple random sampling design.
- 🔴 LO: Use graphical methods (e.g., Q-Q plots, histograms) to assess the normality assumption
- 🔴 LO: Know when to use a \(t\)-test vs. a \(z\)-test (refer to flowchart)
- 🔴 LO: Describe the different characteristics of the standard normal (i.e. \(Z\)) distribution as compared to the Student \(t\) distribution.
  - e.g. the \(t\)-distribution has a single parameter, degrees of freedom, and as the degrees of freedom increases this distribution approaches the normal distribution.
- 🔴 LO: Perform one-sample z-tests and t-tests, including assumption checking, following the steps outlined above.
- 🟠 LO: Understand the connection between a 100(1 − \(\alpha\))% CI and a two-sided hypothesis test.
- 🔴 LO: Define a \(p\)-value as the conditional probability of obtaining a sample statistic at least as extreme as the one observed, given that the null hypothesis is true, i.e. \(\Pr(\text{observed or more extreme sample statistic} \mid H_0 \text{ true})\)
- 🟠 LO: Visualize on a null distribution the rejection region (values of the test statistic for which \(H_0\) will be rejected) and/or \(p\)-values (areas under the curve)
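A sketch of a one-sample t-test in R, with made-up data and a hypothesized mean chosen for illustration:

```r
# One-sample t-test of H0: mu = 20 vs HA: mu != 20 (made-up data)
x <- c(21.5, 19.8, 23.1, 22.0, 20.4, 24.2, 21.8, 22.9)
tt <- t.test(x, mu = 20, alternative = "two.sided", conf.level = 0.95)

tt$statistic        # observed t test statistic
tt$parameter        # degrees of freedom (n - 1 = 7)
tt$p.value          # compare to alpha (e.g. 0.05) to make a decision
tt$conf.int         # 95% CI: if mu0 = 20 lies outside it, reject at the 5% level
```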
15/16: Inference for Two Samples
Many of the Learning Outcomes from 11/12/13: Hypothesis Testing for one-sample will carry over to this unit. In addition we have:
- 🟠 LO: Explain the objectives and applications of inference for two samples in research and data analysis.
- 🔴 LO: Differentiate between independent and dependent populations.
- 🔴 LO: Differentiate between the three types of \(t\)-tests: paired \(t\)-tests, the Welch procedure, and pooled \(t\)-tests (refer to flowchart)
- 🔴 LO: Perform two-sample \(t\)-tests, including assumption checking, following the steps outlined above.
- 🔴 LO: Perform two-sample \(t\)-tests in R using the `t.test()` function
  - Specify the appropriate arguments: `alternative = c("two.sided", "less", "greater")`, `mu = 0`, `paired = FALSE`, `var.equal = FALSE`, `conf.level = 0.95`
  - Specify the data either using `x` and `y`, or using a formula with `data = ...`
- 🔴 LO: Interpret the output of the `t.test()` function, including test statistics, degrees of freedom, \(p\)-values, and confidence intervals.
- 🔴 LO: Identify the degrees of freedom associated with different statistical tests.
- 🟠 LO: Define pooled variance and explain its conceptual basis in the context of two-sample hypothesis testing
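A sketch of a two-sample Welch test using the formula interface and the arguments listed above; the data are made up, with group B deliberately shifted upward:

```r
# Welch (unequal-variance) two-sample t-test with made-up data
scores <- data.frame(
  group = rep(c("A", "B"), each = 6),
  y = c(5.1, 4.8, 6.0, 5.5, 5.9, 5.2,   # group A
        6.8, 7.1, 6.5, 7.4, 6.9, 7.0)   # group B
)
out <- t.test(y ~ group, data = scores,
              alternative = "two.sided", mu = 0,
              paired = FALSE, var.equal = FALSE, conf.level = 0.95)

out$statistic    # observed t test statistic
out$parameter    # Welch (non-integer) degrees of freedom
out$p.value      # small here, since the groups clearly differ
out$conf.int     # 95% CI for the difference in means (A minus B)
```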
17: ANOVA
- 🟡 LO: Explain the conceptual basis of ANOVA, including the partitioning of variance and the F-test for assessing group differences.
- 🔴 LO: Define Analysis of Variance (ANOVA) as a statistical method used to compare means across multiple groups or treatments.
- 🔴 LO: State the null and alternative hypotheses for a one-way ANOVA (either in words or in symbols)
- 🔴 LO: State and check the assumptions underlying the model.
  - for checking normality we use visual aids (e.g. QQ plot)
  - for equal variance we use the rule of thumb \(0.5 < \frac{s_a}{s_b} < 2\), where \(s_a\) and \(s_b\) are the smallest and largest sample standard deviations, respectively.
- 🔴 LO: Interpret side-by-side boxplots to assess whether the equal variance assumption is reasonable
- 🔴 LO: Identify and calculate the appropriate degrees of freedom in a one-way ANOVA
- 🟢 LO: Explain the conceptual basis of degrees of freedom and its importance in hypothesis testing and parameter estimation.
- 🟡 LO: Explain the meaning and importance of balanced study designs
- 🔴 LO: Identify and describe the main components of an ANOVA table, including the sources of variation, degrees of freedom, sums of squares, mean squares, and the F-statistic.
- 🔴 LO: Calculate missing values in an ANOVA table based on the relationships between cells
- 🟢 LO: Interpret the sources of variation presented in the ANOVA table, such as between-groups variation, within-groups variation, and total variation.
- 🟠 LO: Perform a one-way ANOVA in R using the `aov()` function
  - know the formula notation `y ~ x` with a `data` specification
- 🔴 LO: Interpret the results of hypothesis testing (either from an ANOVA table or from R's `aov()` output) based on the F-statistic and \(p\)-value, including decisions to reject or fail to reject the null hypothesis.
- 🟡 LO: Describe why calculation of the p-value for ANOVA is always "one-sided".
- 🟠 LO: Explain the purpose and rationale for conducting post-hoc tests to identify specific group differences.
- 🟠 LO: Identify and apply appropriate post-hoc tests when necessary (e.g. pairwise pooled \(t\)-tests with the Bonferroni correction to discover which group means differ after a significant ANOVA result)
- 🟠 LO: Explain why multiple comparison procedures like the Bonferroni correction are necessary to control Type I errors in hypothesis testing.
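A sketch of the full workflow on made-up data (arbitrary seed; group g3 is deliberately shifted so the overall F-test is significant):

```r
# One-way ANOVA, then Bonferroni-adjusted pairwise post-hoc tests
set.seed(7)
dat <- data.frame(
  group = factor(rep(c("g1", "g2", "g3"), each = 10)),
  y = c(rnorm(10, mean = 5), rnorm(10, mean = 5), rnorm(10, mean = 7))
)

fit <- aov(y ~ group, data = dat)
summary(fit)      # ANOVA table: Df, Sum Sq, Mean Sq, F value, Pr(>F)

# Since the overall F-test is significant, follow up with pairwise
# pooled t-tests using the Bonferroni correction
pairwise.t.test(dat$y, dat$group, p.adjust.method = "bonferroni")
```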
18: Linear Regression and Correlation
- 🟠 LO: Define and identify the explanatory variable (aka independent variable or predictor) and the response variable (aka dependent variable or outcome).
- 🟠 LO: Use scatter plots (explanatory variable (\(x\)) on the x-axis and response variable (\(y\)) on the y-axis) to describe the strength and direction (positive or negative) of a linear relationship
- 🟠 LO: Define simple linear regression (SLR) as a statistical method used to model the relationship between a single independent variable (predictor) and a continuous dependent variable (outcome or response variable).
- 🔴 LO: State and check the assumptions for using SLR, i.e. linearity, nearly normal residuals, and constant variability (homoscedasticity).
- 🔴 LO: Identify and interpret the parameters of a SLR model (\(\beta_0\) and \(\beta_1\))
  - Interpret the slope as
    - "For each unit increase in x, we would expect y to increase/decrease on average by \(\mid \hat \beta_1 \mid\) units"
    - Note that whether the response variable increases or decreases is determined by the sign of \(\hat \beta_1\)
  - Interpret the intercept as
    - "When x = 0, we would expect y to equal, on average, \(\hat \beta_0\)"
    - Explain why the intercept often does not have any practical significance
- 🟠 LO: Plot the fitted SLR line and understand the graphical representation of the slope (\(\hat \beta_1\)) and intercept (\(\hat \beta_0\))
- 🟡 LO: Define and identify residuals \(e_i\) as the difference between the observed (\(y\)) and predicted (\(\hat y\)) values of the response variable.
- 🟡 LO: Explain how parameters are estimated using ordinary least squares (OLS), i.e. the OLS estimates are those that minimize the sum of the squared residuals
- LO: Derive the OLS estimators
- 🔴 LO: Make predictions based on the fitted line using \(\hat y = \hat \beta_0 + \hat \beta_1 x\)
- 🟡 LO: Interpret the values of Pearson's correlation coefficient (\(r\))
- 🟡 LO: Describe the relationship between Pearson's correlation coefficient (\(r\)) and the coefficient of determination, denoted \(R^2\) (AKA the R-squared value), in SLR:
  - This value is calculated as the square of the correlation coefficient, and is between 0 and 1, inclusive.
  - An R-squared value of 1 indicates a perfect fit of the regression model to the observed data
- 🟠 LO: Use residual plots to identify potential outliers (any unusual observations that stand out)
- 🟠 LO: Assess Residuals vs. Fitted plots to check the assumptions
- 🟠 LO: Assess the QQ plot of residuals to check the normality assumption
- 🟡 LO: Define extrapolation and distinguish it from interpolation (predicting for values of \(x\) that are within the range of the observed data).
- 🟡 LO: Perform hypothesis tests on the slope coefficient in a simple linear regression model; see footnote 3
- 🔴 LO: Fit a linear model in R using the `lm(formula, data)` function.
- 🔴 LO: Identify (from the summary output of an `lm()` model in R) the parameter estimates (and therefore the fitted OLS line)
- 🔴 LO: Interpret the summary output of an `lm()` model in R to determine if a significant linear relationship exists (by interpreting the \(p\)-value associated with the slope parameter)
- 🔴 LO: Identify (from the summary output of an `lm()` model in R) the \(R^2\) value and interpret its meaning as the percentage of the variability in the response variable explained by the explanatory variable.
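A sketch of fitting and summarizing an SLR model on simulated data (arbitrary seed; the true intercept and slope are 3 and 0.5, chosen for illustration):

```r
# Simple linear regression: fit, summarize, predict, plot
set.seed(10)
x <- 1:20
y <- 3 + 0.5 * x + rnorm(20, sd = 1)        # true intercept 3, true slope 0.5
fit <- lm(y ~ x)

coef(fit)                                   # beta0-hat and beta1-hat
summary(fit)$r.squared                      # R^2: share of variability explained
predict(fit, newdata = data.frame(x = 10))  # y-hat at x = 10 (interpolation)

plot(x, y)                                  # scatter plot of the data
abline(fit)                                 # with the fitted OLS line
```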
19: Chi-square tests
- 🔴 LO: Construct and interpret contingency tables (along with marginals) to organize and summarize two categorical variables.
- 🔴 LO: Use side-by-side box plots to assess the relationship between a numerical and a categorical variable
- 🟠 LO: Determine the size of a contingency table (\(r \times c\))
- 🔴 LO: Conduct goodness-of-fit tests to determine how well observed data fit a specific distribution.
- 🔴 LO: Perform the two types of chi-square tests:
  - tests for one-way tables (AKA goodness-of-fit tests) and
  - tests for two-way tables (AKA tests for independence)

  following the steps outlined above. Note you should be able to …
  - calculate an expected cell count
  - compute the chi-square test statistic by hand
  - find critical values/approximate \(p\)-values for these tests using the chi-square table
- 🟠 LO: State and check the assumptions for these tests
- 🟠 LO: Use `chisq.test()` to conduct the appropriate chi-square tests
- 🟠 LO: Interpret the output of `chisq.test()`
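A sketch of both flavours of chi-square test in R, with made-up counts:

```r
# (1) Goodness of fit: are these 60 die rolls consistent with a fair die?
observed <- c(8, 12, 9, 11, 10, 10)           # made-up counts for faces 1-6
gof <- chisq.test(observed, p = rep(1/6, 6))  # expected count is 10 per face
gof

# (2) Test of independence on a 2 x 2 contingency table (made-up counts)
tab <- matrix(c(30, 10,
                20, 40), nrow = 2, byrow = TRUE)
res <- chisq.test(tab)
res$expected            # expected cell counts (check all are at least 5)
res$statistic
res$p.value
```

Note that for 2 × 2 tables `chisq.test()` applies the Yates continuity correction by default (`correct = FALSE` turns it off).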
Footnotes
1. You can alternatively `attach()` your data frame, which makes the columns of the data frame available as if they were named vector objects in R. ↩︎
2. Understand how larger MSE values indicate greater discrepancy between estimated and true values, while smaller MSE values indicate better performance. ↩︎
3. Note that a hypothesis test for the intercept is often irrelevant since it is usually outside the range of the data, and hence it is usually an extrapolation. ↩︎