Lecture 3: Assessing Regression Models
University of British Columbia Okanagan
Last lecture we introduced some notation and terminology that we will be using throughout the course.
We discussed how different tasks, namely inference vs. prediction, may lead us to favour certain models over others.
Today we discuss the topic of model assessment to address the challenging task of model selection.
There is no one-size-fits-all best model.
This course aims to introduce you to a small subset of the machine learning approaches available, each with its own set of limitations.
Selecting the best approach for a particular problem can be one of the most challenging tasks for a data scientist.
Today’s lecture will add to this list of considerations when choosing a model in the supervised regression setting …
If our data are labelled, we may want to investigate how close our predictions for some response/output are to the values actually observed.
Naturally, the closer our predicted values are to the true response value, the better.
Our response variable will generally be one of two forms: categorical or numeric. Today we focus on assessing models with a numeric response.
Recall our general model for statistical learning: $Y = f(X) + \epsilon$.
Goal: find an estimate $\hat{f}$ of the unknown function $f$.
When the response variable is numeric, we fall into the category of regression (the topic for the next three lectures)
In the context of regression, the most commonly-used metric for assessing performance is the mean squared error (or MSE)
In words, the MSE represents the average squared difference between a prediction and the corresponding observed response:
$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{f}(x_i)\big)^2,$$
where $\hat{f}(x_i)$ is the prediction that $\hat{f}$ gives for the $i$th observation.
In the simple linear regression context, the sum of these squared differences was known as the Residual Sum of Squares (or RSS); the MSE is simply RSS divided by $n$.
Given some data $(x_1, y_1), \dots, (x_n, y_n)$:
Fit a model $\hat{f}$ to these data.
For each $x_i$ we have an observed response $y_i$ …
… and predicted value $\hat{y}_i = \hat{f}(x_i)$.
We average the squared differences to get MSE $= \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$ (see the R sketch below).
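To make this concrete, here is a minimal R sketch of the calculation; the simulated data and the choice of lm() are assumptions made purely for illustration.

```r
# Minimal sketch: compute the MSE of a fitted model on illustrative data.
set.seed(1)
train <- data.frame(x = runif(100, 0, 10))
train$y <- sin(train$x) + rnorm(100, sd = 0.3)  # assumed "true" relationship plus noise

fit  <- lm(y ~ x, data = train)        # fit a model (here, simple linear regression)
yhat <- predict(fit, newdata = train)  # predicted value for each observation
mse  <- mean((train$y - yhat)^2)       # average squared difference
mse
```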
Notice that this has some desirable properties:
MSE is small when predicted values are close to the true responses,
MSE is large when predicted values are far from the true responses.
Question: Would this be a good model?
We denote the training set, i.e. the collection of observations we use to fit our model, by $\{(x_i, y_i)\}_{i=1}^{n}$.
We will denote the testing set, i.e. the collection of observations that we keep separate from the fitting process and reserve for assessing the model, by $\{(x^*_j, y^*_j)\}_{j=1}^{m}$.
We might find that our model performs quite differently on these two sets of observations.
Recall that $\hat{f}$ is fit using only the training data.
When MSE is calculated using each of these data sets, we distinguish the following (see the R sketch below):
MSE train = average squared differences using the training data
MSE test = average squared differences using the test data
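A minimal R sketch of this distinction, assuming simulated data and an arbitrary 70/30 split:

```r
# Minimal sketch: compare MSE on the training data vs. held-out test data.
set.seed(2)
n     <- 200
dat   <- data.frame(x = runif(n, 0, 10))
dat$y <- sin(dat$x) + rnorm(n, sd = 0.3)

idx   <- sample(n, size = 0.7 * n)  # arbitrary 70/30 split
train <- dat[idx, ]
test  <- dat[-idx, ]

fit <- lm(y ~ x, data = train)      # the model only ever sees the training set

mse_train <- mean((train$y - predict(fit, newdata = train))^2)
mse_test  <- mean((test$y  - predict(fit, newdata = test))^2)
c(MSE_train = mse_train, MSE_test = mse_test)
```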
We are interested in the accuracy of predictions on data the model has never seen before (e.g. new patients, future stock prices).
In general, the error in our predictions can be split into two components:
reducible error, which we can reduce by picking a better statistical learning technique; and
irreducible error, which we can never improve, even if we estimate $f$ perfectly.
Suppose, for notational convenience, we have a training set from which we produce an estimate $\hat{f}$, and that both $\hat{f}$ and the predictors $X$ are held fixed, so that $\hat{Y} = \hat{f}(X)$.
Recall $Y = f(X) + \epsilon$, where $E[\epsilon] = 0$ and $\text{Var}(\epsilon) = \sigma^2$.
Hence
$$E\big[(Y - \hat{Y})^2\big] = \underbrace{\big[f(X) - \hat{f}(X)\big]^2}_{\text{reducible}} \;+\; \underbrace{\text{Var}(\epsilon)}_{\text{irreducible}}.$$
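To see where this comes from, expand the square and use $E[\epsilon] = 0$ (treating $X$ and $\hat{f}$ as fixed):
$$
\begin{aligned}
E\big[(Y - \hat{Y})^2\big]
  &= E\Big[\big(f(X) + \epsilon - \hat{f}(X)\big)^2\Big] \\
  &= \big[f(X) - \hat{f}(X)\big]^2 + 2\big[f(X) - \hat{f}(X)\big]E[\epsilon] + E[\epsilon^2] \\
  &= \big[f(X) - \hat{f}(X)\big]^2 + \text{Var}(\epsilon).
\end{aligned}
$$
The cross term vanishes because $E[\epsilon] = 0$, and $E[\epsilon^2] = \text{Var}(\epsilon)$ for the same reason.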
Reducible error can be further decomposed into error due to squared bias and variance.
The reducible error, as its name suggests, is the portion of the error in a model that can be reduced or eliminated through improvements in the modeling process.
Our goal can now be rephrased as minimizing the reducible error (also known as model error) as much as possible.
All together, we have the expected test MSE at a new point $x_0$:
$$E\Big[\big(y_0 - \hat{f}(x_0)\big)^2\Big] = \text{Var}\big(\hat{f}(x_0)\big) + \big[\text{Bias}\big(\hat{f}(x_0)\big)\big]^2 + \text{Var}(\epsilon).$$
The variance of the error term, $\text{Var}(\epsilon) = \sigma^2$, is the irreducible error.
The irreducible error is a measure of the amount of noise in our data.
Even if we estimate $f$ perfectly, our predictions would still have some error because of $\epsilon$.
Important: no matter how good we make our model, our data will have a certain amount of noise, or irreducible error, that cannot be removed.
Bias refers to the error that is introduced by approximating a real-life problem, which may be extremely complicated, by a much simpler model.
Variance refers to the amount by which $\hat{f}$ would change if we estimated it using a different training data set.
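Written out, with expectations taken over repeated training sets, these two quantities at a fixed point $x_0$ are
$$
\text{Bias}\big(\hat{f}(x_0)\big) = E\big[\hat{f}(x_0)\big] - f(x_0),
\qquad
\text{Var}\big(\hat{f}(x_0)\big) = E\Big[\big(\hat{f}(x_0) - E[\hat{f}(x_0)]\big)^2\Big].
$$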
Inspired by Essays by Scott Fortmann-Roe
Low Bias (first row): models that approximate the real-life problem well will have low bias (hits will be centered around the bullseye).
High Bias (second row): models will systematically be “off the mark”.
Low Variance (first column) indicates that repeated fits land close to one another (hits are tightly clustered).
High Variance (second column) indicates that repeated fits vary widely from training set to training set (hits are widely scattered).
Let’s simulate a training set of size $n$ from a model whose true $f$ we know.
Training set #1: This is one example of a potential training set
Training set #2: Here’s another…
Since we know the true function $f$ (we simulated the data ourselves), we can see how well each fitted model recovers it.
Let’s start by fitting a simple linear regression (SLR) model to training set #1. The resulting fitted line is shown in the plot.
Fitting a SLR to training set #2 will produce this (slightly different) fitted line
The fitted lines from 10 different fits using 10 different training sets.
If we do this repeatedly on different training sets simulated from the same model described on this slide, you will notice that we don’t get very much variation in our fitted model.
This model is therefore said to have low variance.
That is to say, it is not sensitive to small fluctuations in the training set.
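Here is a rough R sketch of this experiment; the true function, noise level, sample size, and number of training sets are all assumptions chosen for illustration.

```r
# Simulate several training sets and refit a simple linear regression to each.
set.seed(3)
f    <- function(x) sin(x)            # assumed "true" function
grid <- seq(0, 10, length.out = 200)  # common grid on which to compare fits

sim_train <- function(n = 50) {
  x <- runif(n, 0, 10)
  data.frame(x = x, y = f(x) + rnorm(n, sd = 0.3))
}

slr_fits <- sapply(1:10, function(i) {
  fit <- lm(y ~ x, data = sim_train())
  predict(fit, newdata = data.frame(x = grid))
})

summary(apply(slr_fits, 1, sd))               # small spread across fits: low variance
matplot(grid, slr_fits, type = "l", lty = 1)  # the 10 fitted lines nearly coincide
```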
High bias refers to a situation where a model makes strong simplifying assumptions about the data, resulting in systematic errors in its predictions.
Underfitting is a consequence of high bias. It refers to the poor performance of a model on both the training and test data due to its inability to capture the data’s underlying patterns.
To demonstrate this concept we’ll use the local polynomial model (using the loess function in R).
We will call this the “green model”
Don’t worry if you don’t know what this model is. We are unlikely to cover these in this course — just consider it a relatively flexible model.
The span argument controls the level of flexibility; the “green model” will fit a local polynomial with a lot of flexibility (a small span).
Fitting a highly flexible loess model to training set #1 will produce this fitted curve
Fitting a highly flexible loess model to training set #2 will produce this (very different) fitted curve
The fitted curves from 10 different fits using 10 different training sets.
High variance models tend to capture noise and random fluctuations in the training data rather than the underlying patterns.
Models like these are said to be overfitting to the training data. Overfitting tends to result in very low training error and comparatively high testing error (poor generalization).
While small changes in the training set cause large changes in the estimate $\hat{f}$, the estimate is still, on average, close to the true $f$.
Thus the green model has low bias.
Even though a single fitted model corresponds too closely to the training data, on average this model is close to the truth.
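The same simulation loop, now swapping in a very flexible loess fit for the “green model”; the span value here is an assumption (for loess, a smaller span means a more flexible fit).

```r
# Refit a very flexible loess ("green model") to several simulated training sets.
set.seed(4)
f    <- function(x) sin(x)
grid <- seq(0, 10, length.out = 200)
sim_train <- function(n = 50) {
  x <- runif(n, 0, 10)
  data.frame(x = x, y = f(x) + rnorm(n, sd = 0.3))
}

green_fits <- sapply(1:10, function(i) {
  fit <- loess(y ~ x, data = sim_train(), span = 0.2,
               control = loess.control(surface = "direct"))  # "direct" allows prediction over the full grid
  predict(fit, newdata = data.frame(x = grid))
})

summary(apply(green_fits, 1, sd))             # large spread across fits: high variance
summary(abs(rowMeans(green_fits) - f(grid)))  # but the average fit stays near f: low bias
```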
As before, we will use the local polynomial model (using the loess function in R) to fit this model.
We will call this the “blue model”
We will adjust the span argument to decrease the level of flexibility.
Fitting a loess model with medium flexibility to training set #1 will produce this fitted curve
Fitting a loess model with medium flexibility to training set #2 will produce this (different) fitted curve
The fitted curves from 10 different fits using 10 different training sets.
If we do this repeatedly on different training sets simulated from the same model described on this slide, you will notice that we don’t get very much variation in our fitted model.
This model is therefore said to have low variance.
That is to say, it is not sensitive to small fluctuations in the training set.
The blue model has low bias.
That is, on average the estimate is close to the truth.
Unlike the green model (which also has low bias), this model is not overfitting to the data.
The blue model strikes a nice balance between low variance and low bias.
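Continuing the sketch, the “blue model” can be mimicked with a moderate span (0.5 here, an arbitrary “medium flexibility” choice); both its spread across training sets and the distance of its average fit from the truth should stay small.

```r
# Refit a moderately flexible loess ("blue model") to several simulated training sets.
set.seed(5)
f    <- function(x) sin(x)
grid <- seq(0, 10, length.out = 200)
sim_train <- function(n = 50) {
  x <- runif(n, 0, 10)
  data.frame(x = x, y = f(x) + rnorm(n, sd = 0.3))
}

blue_fits <- sapply(1:10, function(i) {
  fit <- loess(y ~ x, data = sim_train(), span = 0.5,
               control = loess.control(surface = "direct"))
  predict(fit, newdata = data.frame(x = grid))
})

summary(apply(blue_fits, 1, sd))             # modest spread across fits: low variance
summary(abs(rowMeans(blue_fits) - f(grid)))  # average fit close to f: low bias
```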
Let’s explore what the average model looks like for each of these scenarios …
Image adapted from MollyMooCrafts
The bias–variance tradeoff is the conflict in trying to simultaneously minimize these two sources of error, which prevents supervised learning algorithms from generalizing beyond their training set.
Since the test MSE is composed of both (squared) bias and variance, we want to try to reduce both of them.
However, as we decrease bias, we tend to increase variance and vice versa.
To manage the bias–variance tradeoff effectively, you can employ techniques such as cross-validation, regularization, feature selection, and ensemble methods (e.g., bagging and boosting), all of which will be discussed throughout the course.
These techniques help you find the optimal model complexity and reduce overfitting or underfitting, ultimately improving a model’s generalization performance.
The following simulations are taken from your ISLR2 textbook.
Looking at a variety of different data sets, we can gain some general insights on how model flexibility affects bias, variance, and MSE.
Again, the particulars about the specific models (linear regression, loess, and smoothing splines) are not important.
What is important is to understand how the level of flexibility impacts the bias and variance of the models (and therefore the MSE).
ISLR Figure 2.9 (left plot) Data simulated from $f$ (shown in black), together with three estimates of $f$: a linear regression line (orange) and two smoothing spline fits (blue and green).
High bias, low variance
Low Variance: the orange fit would not have much variability from training set to training set.
High Bias: it systematically underestimates $f$ between 40 and 80 and overestimates it towards the boundaries, for example.
Low bias, high variance
As the green curve is the most flexible, it matches the training data very closely
However, it is much more wiggly than the true $f$.
Low bias, low variance
The blue curve strikes the balance between low variance and low bias
As one may expect, the average fitted curve is quite similar to the true $f$.
To estimate the expected training and test MSE, this simulation is repeated over many data sets and the MSEs are averaged.
The average training MSE and testing MSE are plotted in grey and red, respectively, as a function of flexibility.
The flexibility of these models is mapped to numeric values on the horizontal axis.
Squares (🟧, 🟦, 🟩) indicate the MSEs associated with the corresponding orange, blue, and green models.
ISLR Figure 2.9 (right plot) Training MSE (grey curve), test MSE (red curve), and minimum possible test MSE over all methods (dashed line). Squares represent the training and test MSEs for the three fits shown in the left-hand panel.
Test MSE
The orange and green models have high test MSE.
Blue is close to optimal
Minimizing test MSE
The horizontal dashed line indicates $\text{Var}(\epsilon)$, the irreducible error.
This line corresponds to the lowest achievable test MSE among all possible methods.
Hence, the “blue model” is close to optimal.
Training MSE
The green curve has the lowest training MSE of all three methods, since it corresponds to the most flexible of the three curves fit in the left-hand panel
Training MSE
The orange curve has the highest training MSE of all three methods, since even on the training set, it is not flexible enough to approximate the underlying relationship
Training MSE
The blue curve obtains a similar training and testing MSE.
The U-shaped test MSE is a fundamental property of statistical learning that holds regardless of the particular dataset and statistical method being used.
As model flexibility increases, the training MSE will decrease, but the test MSE may not.
When a method yields a small training MSE but a large test MSE, it is said to be overfitting the data.
We almost always expect the training MSE to be smaller than the test MSE.
The training MSE declines monotonically as flexibility increases.
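Both facts can be checked with a small simulation. The sketch below sweeps the loess span from very flexible to very smooth and records the training and test MSE at each setting; the data-generating model and the span grid are assumptions chosen for illustration.

```r
# Sweep model flexibility (via the loess span) and track training vs. test MSE.
set.seed(6)
f <- function(x) sin(x)
make_data <- function(n = 100) {
  x <- runif(n, 0, 10)
  data.frame(x = x, y = f(x) + rnorm(n, sd = 0.3))
}
train <- make_data()
test  <- make_data()

spans <- c(0.15, 0.25, 0.4, 0.6, 0.8, 1.0)  # most flexible to least flexible
mse <- t(sapply(spans, function(s) {
  fit <- loess(y ~ x, data = train, span = s,
               control = loess.control(surface = "direct"))
  c(train = mean((train$y - predict(fit, newdata = train))^2),
    test  = mean((test$y  - predict(fit, newdata = test))^2))
}))
cbind(span = spans, mse)  # training MSE shrinks with flexibility; test MSE tends to bottom out in between
```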