Analyzing the Results of a Classification Model in SAP Analytics Cloud Smart Predict

Objective

After completing this lesson, you will be able to analyze the results of a classification model.

Overview Report

Global Performance Indicators and Target Statistics

Global performance indicators:

  • Predictive power measures the accuracy of the predictive model. It takes a value between 0% and 100%. This value must be as close as possible to 100%, without being equal to 100% (a predictive power of exactly 100% usually indicates that a leaker variable is present).
  • Predictive confidence indicates the capacity of your predictive model to achieve the same degree of accuracy when you apply it to a new data set that has the same characteristics as the training data set. Predictive confidence takes a value between 0% and 100%. This value must be as close as possible to 100%.

Target statistics:

Gives the frequency (%) of the two target categories, in this case 1 and 0, in the training and validation subsets.

The Global Performance Indicators and Target Statistics for the overview report in a classification model.

Influencer Contributions and % Detected Target

Influencer Contributions:

Shows the relative importance of each variable used in the predictive model. It examines the influence on the target of the top five variables used in the predictive model. It's a useful report to spot if there are leaker variables in a model, as the leaker has a suspiciously high contribution, overwhelming all of the other variables.

% Detected Target:

Compares the classification model performance (on the validation subsample of the data) to a random model and a hypothetically perfect (100% accurate) model.

The % detected target curve compares the model to the hypothetically perfect and random models. It shows the percentage of the total population (x-axis) that corresponds to the % of positive detected targets (y-axis) given by the classification model.

  • If the model were perfectly accurate, the blue model curve would overlap the green perfect model curve. Predictive power = 100%.
  • If the model performed no better than a random selection, the blue model curve would overlap the red random curve. Predictive power = 0%.
The Influencer Contributions and % detected target for the overview report in a classification model.

Influencer Contributions Report

The influencer contributions report shows the relative importance of each influencer used in the predictive model. It allows you to examine the influence on the target of each influencer used in the predictive model.

Influencer Contributions

Examines the relative significance of all of the variables within the model. It is a useful report to spot if there are leaker variables in a model, as the leaker has a suspiciously high contribution, overwhelming other variables.

The influencers are sorted by decreasing importance: the most contributive influencers explain the target best. Only contributive influencers are displayed in the reports; influencers with no contribution are hidden.

The sum of their contributions equals 100%.

The Influencer Contributions chart for the influencer contributions report in a classification model.
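As an illustration of how the displayed contributions relate to each other, the following minimal sketch normalizes a set of made-up raw importance values so that the reported contributions sum to 100% and zero-contribution influencers are hidden. The influencer names and values are assumptions for the example, not Smart Predict output.

```python
# Illustrative sketch only: made-up raw importance values, normalized so the
# reported contributions sum to 100%; zero-contribution influencers are hidden.
raw_importance = {"AGE": 0.34, "BALANCE": 0.21, "JOB": 0.13, "MARITAL": 0.0}

total = sum(raw_importance.values())
contributions = {
    name: 100 * value / total
    for name, value in raw_importance.items()
    if value > 0  # influencers with no contribution are not displayed
}

# Sorted by decreasing importance, as in the report
for name, pct in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.1f}%")
```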

Grouped Category Influence

This chart shows groupings of categories of a variable, where all the categories in a group share the same influence on the target variable. The report shows which category group has the most influence. However, to further examine the results by influencer, you can change the category group by using the input help in this chart.

In this example, for the influencer AGE, the category 45–54 has the highest positive influence on the target.

The Grouped Category Influence chart for the influencer contributions report in a classification model.

Grouped Category Statistics

This chart shows the details of how the grouped categories influence the target variable over the selected data set.

  • The x-axis displays the target mean. For a nominal target, the target mean is the frequency of positive cases for the target variable contained in the training data set.
  • The y-axis displays the frequency of the grouped category in the selected data set.
The Grouped Category Statistics chart for the influencer contributions report in a classification model.
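To make the two axes concrete, here is a minimal sketch (outside of SAP Analytics Cloud) that computes the target mean and the category frequency per grouped category. The column names age_group and target and the sample rows are made-up assumptions.

```python
# Illustrative sketch only: computes the two quantities plotted in the
# Grouped Category Statistics chart. Column names and rows are made up.
import pandas as pd

data = pd.DataFrame({
    "age_group": ["25-34", "35-44", "45-54", "45-54", "35-44", "25-34"],
    "target":    [0,       1,       1,       1,       0,       0],
})

grouped = data.groupby("age_group")["target"]
stats = pd.DataFrame({
    "target_mean": grouped.mean(),             # x-axis: share of positive targets
    "frequency": grouped.size() / len(data),   # y-axis: share of the data set
})
print(stats)
```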

Confusion Matrix Report

Confusion Matrix

Confusion matrix: This table shows the performance of a classification algorithm by comparing the predicted value of the target variable to its actual value.

The confusion matrix is also known as the error matrix. Each column represents the observations in a predicted class, while each row represents the observations in an actual class.

The confusion matrix table for the confusion matrix report in a classification model.

Metrics

You can use the confusion matrix to compute metrics that address different needs.

You can read the metrics as follows:

  • Classification rate: proportion of targets accurately classified by the model when applied to the validation data set. Formula: (TP + TN) / N
  • Sensitivity (recall): proportion of actual positive targets that have been correctly predicted. Formula: TP / (TP + FN)
  • Specificity: proportion of actual negative targets that have been correctly predicted. Formula: TN / (FP + TN)
  • Precision: proportion of predicted positive targets that are actually positive. Formula: TP / (TP + FP)
  • F1 score: harmonic mean of precision and recall (recall and precision are evenly weighted). Formula: 2 / ((1/Precision) + (1/Sensitivity))
  • Fall-out: proportion of actual negative targets that have been incorrectly detected as positive. Formula: FP / (FP + TN), or 100% - Specificity

Definitions:

  • N: The number of observations.
  • TP (true positive): The number of correctly predicted positive targets.
  • FN (false negative): The number of actual positive targets that have been predicted negative.
  • FP (false positive): The number of actual negative targets that have been predicted positive.
  • TN (true negative): The number of correctly predicted negative targets.
The metrics table for the confusion matrix report in a classification model.
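If you want to recompute these metrics yourself from the four cells of a confusion matrix, a minimal sketch follows. The counts are made-up example values, not output from Smart Predict.

```python
# Illustrative sketch only: derives the metrics above from the four cells of a
# confusion matrix. The counts are made-up example values.
TP, FP, FN, TN = 420, 80, 60, 440
N = TP + FP + FN + TN

classification_rate = (TP + TN) / N
sensitivity = TP / (TP + FN)        # also called recall, or true positive rate
specificity = TN / (FP + TN)
precision = TP / (TP + FP)
f1_score = 2 / ((1 / precision) + (1 / sensitivity))
fall_out = FP / (FP + TN)           # equals 100% - specificity

for name, value in [
    ("Classification rate", classification_rate),
    ("Sensitivity", sensitivity),
    ("Specificity", specificity),
    ("Precision", precision),
    ("F1 score", f1_score),
    ("Fall-out", fall_out),
]:
    print(f"{name}: {value:.2%}")
```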

Profit Simulations

Profit Simulation Report

Associate a profit or cost with the positive categories (observations that belong to the population you want to target) of the confusion matrix. You can visualize your profit, based on the selected threshold, or automatically select the threshold, based on your profit parameters.

Unit Cost vs. Profit and Total Profit

Unit cost vs. profit:

Set the threshold that determines which values are considered positive (see the relevant link) and provide the following:

  • Cost per predicted positive: Define a cost per observation, classified as positive by the confusion matrix. This covers the costs for true positive targets (actual positive targets that have been predicted as positive) and false-positive targets (actual negative targets that have been predicted positive).
  • Profit per actual positive: Define a profit per true positive target (targets correctly predicted as positive) identified by the confusion matrix.

To see the threshold that gives you a maximum profit for the profit parameters that you have set, select the Maximize Profit button in this report.

Total profit:

This table is updated accordingly to calculate your profit and cost. It gives an estimate of the gap between the gain of the action based on a random selection (without any predictive model) and the gain based on the selection made with the predictive model.

The Profit Simulation report for the classification model.
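The following sketch mimics the idea behind Maximize Profit, but it is not Smart Predict's internal calculation: for each candidate threshold it applies the cost per predicted positive and the profit per actual positive, and keeps the threshold with the highest total profit. The scores and outcomes are randomly generated assumptions.

```python
# Illustrative sketch only: scans candidate thresholds and keeps the one with
# the highest total profit. Scores and outcomes are randomly generated.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000)                          # made-up model scores
actual = (rng.random(1000) < scores).astype(int)   # made-up actual outcomes

cost_per_predicted_positive = 2.0
profit_per_actual_positive = 10.0

best_threshold, best_profit = None, float("-inf")
for threshold in np.linspace(0.0, 1.0, 101):
    predicted_positive = scores >= threshold
    true_positive = predicted_positive & (actual == 1)
    total_profit = (profit_per_actual_positive * true_positive.sum()
                    - cost_per_predicted_positive * predicted_positive.sum())
    if total_profit > best_profit:
        best_threshold, best_profit = threshold, total_profit

print(f"Best threshold: {best_threshold:.2f}, total profit: {best_profit:.2f}")
```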

Performance Curves Report

Performance Curves

Evaluate the accuracy of your predictive model by using the performance curves. Use the performance curves report to compare the performance of your predictive model to a random model, and a hypothetical perfect predictive model.

% Detected Target

The % detected target curve is also displayed in the overview report. It compares the model to the perfect and random models.

In the example, 20% of the total population is selected. By using the predictive model, the campaign reaches 3.09 times more positive cases than a random customer selection (set at lift = 1.00). The curve shows the percentage of the total population (on the x-axis) that corresponds to the percentage of positive detected targets (on the y-axis) given by the classification model.

The % detected target graph in the performance curves report for the classification model.
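A minimal sketch of how such a curve can be built from scored validation records: sort the records from highest to lowest score and accumulate the share of positive targets found. The scores and outcomes below are randomly generated assumptions, not Smart Predict data.

```python
# Illustrative sketch only: builds the % detected target (cumulative gains)
# points from made-up scores and outcomes.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random(500)                           # made-up model scores
actual = (rng.random(500) < scores).astype(int)    # made-up actual outcomes

order = np.argsort(-scores)                        # best-scored records first
detected = np.cumsum(actual[order]) / actual.sum()            # y-axis
population_pct = np.arange(1, len(scores) + 1) / len(scores)  # x-axis

# Share of positives detected when selecting the top 20% of the population
idx = int(0.20 * len(scores)) - 1
print(f"Top 20% of the population detects {detected[idx]:.1%} of the positives")
```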

Lift

The lift curve shows how much better the predictive model is than a random selection.

The x-axis shows the percentage of the population and is ordered from the highest probability of a positive detected target to the lowest probability of a positive detected target.

The y-axis shows how much better your model is than the random predictive model (lift).

The random selection, shown in red, is set where lift = 1.00.

Case study: In the example, a bank wants to run an advertising campaign. It has built a classification model to identify which customers to target with the campaign.

The predictive model classifies the customers into two categories:

  • Positive targets: customers who respond to the campaign.
  • Negative targets: customers who do not respond to the campaign.

A perfect predictive model reaches 4.19 times more positive cases than the random selection.

The lift graph in the performance curves report for the classification model.
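The lift values themselves can be derived from the same ordering of scored records: the positive rate in the top x% of records divided by the overall positive rate. The sketch below uses randomly generated scores and outcomes as assumptions.

```python
# Illustrative sketch only: lift = positive rate in the top x% of scored
# records divided by the overall positive rate (random selection baseline).
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random(500)                           # made-up model scores
actual = (rng.random(500) < scores).astype(int)    # made-up actual outcomes

order = np.argsort(-scores)                        # best-scored records first
overall_rate = actual.mean()                       # lift = 1.00 baseline

for pct in (0.10, 0.20, 0.50):
    top = actual[order][: int(pct * len(actual))]
    print(f"Lift at top {pct:.0%}: {top.mean() / overall_rate:.2f}")
```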

Sensitivity

The sensitivity curve, or ROC (Receiver Operating Characteristic) curve, shows how well a model discriminates in terms of the trade-off between sensitivity and specificity, or, between correct and mistaken detection, as the detection threshold is varied.

The curve is constructed by plotting the true positive rate (TPR) against the false positive rate (FPR).

The true positive rate is the proportion of observations that were correctly predicted to be positive out of all positive observations (TP/(TP + FN)). Similarly, the false positive rate is the proportion of observations that are incorrectly predicted to be positive out of all negative observations (FP/(TN + FP)). For example, in medical testing, the true positive rate is the rate in which people are correctly identified to test positive for the disease in question.

  • Sensitivity: The probability that the model predicts a positive outcome for an observation when the outcome is actually positive. This is the TPR.
  • Specificity: The probability that the model predicts a negative outcome for an observation when the outcome is negative. 1-Specificity is the FPR. A model with high sensitivity and high specificity has a sensitivity curve that hugs the top left corner of the plot.

A model with low sensitivity and low specificity has a curve that is close to the 45-degree diagonal line. As a baseline, a random classifier is expected to give points lying along the diagonal (FPR = TPR). The closer the curve comes to the 45-degree diagonal of the sensitivity space, the less accurate the test.

The sensitivity graph in the performance curves report for the classification model.
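The points on such a curve can be reproduced by sweeping the detection threshold and computing the true positive rate and false positive rate at each step, as in this sketch. The scores and outcomes are randomly generated assumptions.

```python
# Illustrative sketch only: computes TPR (sensitivity) and FPR (1 - specificity)
# at a few detection thresholds from made-up scores and outcomes.
import numpy as np

rng = np.random.default_rng(3)
scores = rng.random(500)                           # made-up model scores
actual = (rng.random(500) < scores).astype(int)    # made-up actual outcomes

for threshold in (0.2, 0.4, 0.6, 0.8):
    predicted = scores >= threshold
    tp = np.sum(predicted & (actual == 1))
    fp = np.sum(predicted & (actual == 0))
    fn = np.sum(~predicted & (actual == 1))
    tn = np.sum(~predicted & (actual == 0))
    tpr = tp / (tp + fn)                           # sensitivity
    fpr = fp / (fp + tn)                           # 1 - specificity
    print(f"Threshold {threshold:.1f}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```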

Lorenz

Lorenz (1-sensitivity) displays the cumulative proportion of missed signals (false negatives) accounted for by the records corresponding to the bottom x% of model scores.

The Lorenz (specificity) curve, displaying the cumulative proportion of true negatives (specificity) accounted for by the bottom x% of model scores, is also available by using the dropdown menu for this graph.

The Lorenz graph in the performance curves report for the classification model.
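In other words, the curve accumulates the positives that would be missed if every record below a given score were classified as negative. A minimal sketch of that calculation, with randomly generated scores and outcomes as assumptions:

```python
# Illustrative sketch only: Lorenz (1 - sensitivity) points, i.e. the share of
# positive targets contained in the bottom x% of model scores.
import numpy as np

rng = np.random.default_rng(4)
scores = rng.random(500)                           # made-up model scores
actual = (rng.random(500) < scores).astype(int)    # made-up actual outcomes

order = np.argsort(scores)                         # lowest scores first
missed_share = np.cumsum(actual[order]) / actual.sum()       # y-axis
bottom_pct = np.arange(1, len(scores) + 1) / len(scores)     # x-axis

idx = int(0.30 * len(scores)) - 1
print(f"Bottom 30% of scores contains {missed_share[idx]:.1%} of the positives")
```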

Density

The density curves display the density function of the score (the probability that an observation belongs to each class) for the target category (the "positive" density curve) and for the nontarget category (the "negative" density curve).

  • The estimated density function in an interval is equal to the formula: (Number of observations in the interval/total number of observations)/length of the interval.
  • The length of an interval is its upper bound minus its lower bound.
  • The x-axis shows the score and the y-axis shows the density.

As a default view, a line chart is displayed with the following density curves:

  • Positives: This curve displays the distribution of population with positive target value per score value.
  • Negatives: This curve displays the distribution of population with negative target value per score value.
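The density estimate described above can be reproduced directly from the formula, as in this sketch. The scores and outcomes are randomly generated assumptions, and the 20 equal-length score intervals are an arbitrary choice.

```python
# Illustrative sketch only: estimates the positive and negative density curves
# as (observations in interval / total observations) / interval length.
import numpy as np

rng = np.random.default_rng(5)
scores = rng.random(1000)                          # made-up model scores
actual = (rng.random(1000) < scores).astype(int)   # made-up actual outcomes

bins = np.linspace(0.0, 1.0, 21)                   # 20 equal-length score intervals
width = bins[1] - bins[0]

for label, name in ((1, "positives"), (0, "negatives")):
    subset = scores[actual == label]
    counts, _ = np.histogram(subset, bins=bins)
    density = (counts / len(subset)) / width       # estimated density per interval
    print(name, np.round(density[:5], 2), "...")   # first few intervals
```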

Question: Is the example below a good, bad, or confused predictive model?

The density graph in the performance curves report for the classification model.

Answer: It is a confused predictive model. A confused predictive model has no big areas of overlap between the positive and negative cases. This means that further investigation is required: consider additional metrics and graphs, and check the input variables.

What makes a good and bad predictive model?

The graph below is an example of a bad predictive model. There is a large zone with high density for both the positive (the blue line) and negative (the yellow line) cases. This means the predictive model is not very good at separating the positive and negative cases.

A visual representation of a bad density curve without data.

The graph below is an example of a good predictive model. There is a small overlapping zone with low density. This means the predictive model is pretty good at separating the positive and negative cases.

A visual representation of a good density curve without data.

Next Steps

When you have analyzed your predictive model, you have two choices:

1. The predictive model's performance is satisfactory. If you are happy with your model's performance, then apply the model.

2. The predictive model's performance must be improved. If you are unhappy with the model's performance, experiment with the settings.

To do this, you can:

  • Duplicate the predictive model.
    1. Open the predictive scenario, which contains the predictive model to be duplicated.
    2. Open the Predictive Model list.
    3. Click the predictive model to be duplicated and, from the menu, select Copy. This creates an exact (untrained) copy of the original version of the predictive model.
    4. Adjust the settings of the copy and train it.
    5. Compare the two versions and keep the better one.
  • Update the settings of the existing model and retrain it. This deletes the previous version.
