Regression Performance Metrics

To analyze the performance of a regression model, several evaluation measures are used. These performance metrics are important for a quantitative analysis of the results produced by regression models.

The various metrics used to evaluate the results of the prediction are:

  1. Mean Squared Error (MSE)
  2. Root-Mean-Squared Error (RMSE)
  3. Mean Absolute Error (MAE)
  4. R² or Coefficient of Determination
  5. Adjusted R²

They are described below, one by one.

 

1. Mean Squared Error (MSE)

MSE is a widely used measure of the difference between the values predicted by a regression model and the actual values observed from the system. It is simply the average of the squared differences between the target values e_actual and the values predicted by the regression model, e_predicted:

    MSE = \frac{1}{n} \sum_{k=1}^{n} \left( e_{actual,k} - e_{predicted,k} \right)^2    ———> (1)

where e_{actual,k} are the actual values and e_{predicted,k} are the predicted values for all k. Here, n denotes the number of data records present in the database.
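
As a minimal illustration of equation (1), the Python sketch below computes MSE with NumPy; the arrays e_actual and e_predicted are hypothetical values chosen purely for demonstration:

    import numpy as np

    # Hypothetical actual and predicted values (for illustration only)
    e_actual = np.array([3.0, -0.5, 2.0, 7.0])
    e_predicted = np.array([2.5, 0.0, 2.0, 8.0])

    # Equation (1): average of the squared differences over the n records
    mse = np.mean((e_actual - e_predicted) ** 2)
    print(mse)  # 0.375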

 

2. Root-Mean-Squared Error (RMSE)

RMSE is a well-known performance measure of the dissimilarity between the values predicted by a regression model and the values actually observed from the system being modeled. The RMSE of a regression model's estimates of the predicted variable e_predicted is the square root of the Mean Squared Error (MSE):

    RMSE = \sqrt{ \frac{1}{n} \sum_{k=1}^{n} \left( e_{actual,k} - e_{predicted,k} \right)^2 }    ———> (2)

where e_{actual,k} are the actual values and e_{predicted,k} are the predicted values for all k. Here, n denotes the number of data records present in the database.
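
Reusing the same hypothetical arrays, here is a sketch of equation (2); note that RMSE is simply the square root of the MSE computed above:

    import numpy as np

    e_actual = np.array([3.0, -0.5, 2.0, 7.0])
    e_predicted = np.array([2.5, 0.0, 2.0, 8.0])

    # Equation (2): square root of the mean squared error
    rmse = np.sqrt(np.mean((e_actual - e_predicted) ** 2))
    print(rmse)  # ≈ 0.612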

 

3. Mean Absolute Error (MAE)

MAE is the average of the absolute differences between the target values t and the values y predicted by the model. Because the errors are not squared, it is more robust to outliers than MSE. The formula for MAE is shown below:

    MAE = \frac{1}{n} \sum_{k=1}^{n} \left| t_k - y_k \right|    ———> (3)

Here, n denotes the number of data records present in the database.
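
A sketch of equation (3) under the same assumptions; since the errors are not squared, a single large outlier influences MAE far less than it would MSE:

    import numpy as np

    # Hypothetical target values t and model predictions y
    t = np.array([3.0, -0.5, 2.0, 7.0])
    y = np.array([2.5, 0.0, 2.0, 8.0])

    # Equation (3): average of the absolute differences
    mae = np.mean(np.abs(t - y))
    print(mae)  # 0.5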

 

4. R² or Coefficient of Determination

R² (pronounced as R-Squared) or the Coefficient of Determination is a very useful metric for evaluating the performance of a regression model. It is a statistical measure of fit that indicates how much of the variation in a dependent variable is explained by the independent variable(s) in a regression model. In other words, R² shows how well the data fit the regression model (i.e., the goodness of fit).

It is basically the square of the correlation coefficient (R) and measures the proportion of variation in the dependent variable that can be attributed to the independent variable(s). Mathematically, R² is calculated by dividing the sum of squares of residuals (SS_res) by the total sum of squares (SS_tot) and subtracting the result from 1. Here, SS_tot measures the total variation, SS_reg the explained variation, and SS_res the unexplained variation.

Since SS_res + SS_reg = SS_tot and R² = explained variation / total variation, the formula for R² is given by:

    R^2 = \frac{SS_{reg}}{SS_{tot}} = 1 - \frac{SS_{res}}{SS_{tot}}    ———> (4)

R² values range from 0 to 1 and are commonly stated as percentages from 0% to 100%. The most common interpretation of this metric is how well the regression model fits the observed data. For example, an R² of 70% indicates that 70% of the variation in the data is explained by the regression model. Generally, a higher R² value indicates a better fit for the model.

However, a high R² is not always good for the regression model. The quality of this statistical measure depends on many factors, such as the nature of the variables employed in the model, the units of measure of the variables, and the data transformations applied. R² also suffers from the problem that its score improves as more predictor terms are added, even when the model itself is not improving, which may misguide data analysts.

⇒ N.B. The value of R² may sometimes be negative, since in practice it ranges from −∞ to 1, even though its theoretical range lies between 0 and 1.
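
A minimal sketch of equation (4), computing R² directly from SS_res and SS_tot for the same hypothetical arrays; predictions worse than simply guessing the mean would make SS_res exceed SS_tot and drive R² negative, as the note above warns:

    import numpy as np

    t = np.array([3.0, -0.5, 2.0, 7.0])   # hypothetical target values
    y = np.array([2.5, 0.0, 2.0, 8.0])    # hypothetical predictions

    ss_res = np.sum((t - y) ** 2)           # unexplained variation
    ss_tot = np.sum((t - np.mean(t)) ** 2)  # total variation
    r2 = 1 - ss_res / ss_tot                # equation (4)
    print(r2)  # ≈ 0.949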

 

5. Adjusted R² 

In order to solve the problem mentioned above, an improved version of R² has been introduced, known as Adjusted R², which carries the same interpretation as R². It is always less than or equal to R², as it adjusts for the increasing number of predictor variables and improves only if there is a real improvement in the model.

Basically, the use of the Adjusted R² (denoted by \bar{R}^2 and pronounced “R bar squared”) is an attempt to account for the phenomenon of R² automatically and spuriously increasing when extra predictor variables are added to the model.

The Adjusted R² is defined as:

    \bar{R}^2 = 1 - (1 - R^2) \, \frac{n - 1}{n - p - 1}    ———> (5)

where p is the total number of predictor variables in the model (excluding the constant term), and n is the sample size.
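
A sketch of equation (5), continuing the same hypothetical example and assuming a single predictor variable (p = 1); with only n = 4 records, the penalty for the extra degree of freedom is clearly visible:

    import numpy as np

    t = np.array([3.0, -0.5, 2.0, 7.0])   # hypothetical targets
    y = np.array([2.5, 0.0, 2.0, 8.0])    # hypothetical predictions

    r2 = 1 - np.sum((t - y) ** 2) / np.sum((t - np.mean(t)) ** 2)

    n, p = len(t), 1  # sample size and number of predictors (assumed)
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    print(adj_r2)  # ≈ 0.923, lower than the R² of ≈ 0.949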