View forecast error metrics (View results)

Created by Shyam Sayana, Modified on Mon, 21 Apr at 7:54 AM by Shyam Sayana

The View Results section provides a detailed breakdown of the forecast error metrics, including MAD, MAPE, SMAPE, MASE, and RMSE. These metrics help evaluate the forecast's accuracy by comparing predicted values with actual data. 

Clicking on the View Results button will open a side panel displaying detailed error metrics.

After clicking View Results, the side panel will display MAPE and RMSE as the default error metrics.

A table showing detailed error metrics (MAD, MAPE, SMAPE, MASE, and RMSE) for the selected items will also appear. 

Click the View Details button to see all the error metrics and access the entire table.


Clicking View Details will navigate you to a detailed error metrics screen, where you can access a comprehensive breakdown of all the forecasting error metrics.

The error metrics details are displayed in graphical and tabular formats to analyze forecast accuracy comprehensively.

Graph

MAPE

The Mean Absolute Percentage Error (MAPE) in the graph represents the percentage error in the forecasted values compared to the actual values. It measures how accurate the forecast is by calculating the average absolute percentage difference between predicted and actual values.

The X-axis represents different MAPE percentage ranges, and the Y-axis represents the item count.

  • Most items fall in the 0%–7% MAPE range, meaning they have low forecast error and high accuracy.

  • Some items fall in the 7%–14% range, indicating a moderate error.

  • Significantly few items fall in higher MAPE ranges (above 14%), suggesting that most forecasts are relatively accurate.
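As a sketch, MAPE averages the absolute percentage error per period. The values below are illustrative only, not product data:

```python
# MAPE: mean of |actual - forecast| / |actual|, expressed as a percentage.
def mape(actual, forecast):
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

actual = [100, 200, 300, 400]
forecast = [110, 190, 306, 392]
print(round(mape(actual, forecast), 2))  # → 4.75, i.e. inside the 0%–7% bucket
```

An item with a MAPE of 4.75% would be counted in the 0%–7% bar of the graph above.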

RMSE

RMSE is a commonly used error metric that measures the average magnitude of forecasting errors. It calculates the square root of the mean of squared differences between actual and forecasted values. The x-axis represents the RMSE ranges, and the y-axis shows the number of items within each RMSE range.

  • Many items fall in the 0-111 RMSE range, indicating that most forecasts have relatively low errors.

  • A few items have RMSE values in higher ranges, suggesting higher forecast deviations for those specific items.
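A minimal sketch of the RMSE calculation, again with made-up numbers:

```python
import math

# RMSE: square root of the mean of squared errors. Because errors are
# squared before averaging, large deviations are penalized more heavily.
def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

actual = [120, 150, 170, 160]
forecast = [110, 155, 165, 172]
print(round(rmse(actual, forecast), 2))  # → 8.57, well inside the 0-111 bucket
```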


MAD


MAD measures the average absolute difference between actual and forecasted values, providing a straightforward measure of forecast accuracy. The x-axis represents different MAD ranges (e.g., 0-86, 86-172, etc.), and the y-axis shows the count of items within each MAD range.

  • Most items have MAD values between 0-86, indicating that most forecast errors are relatively small.

  • A few items fall into higher MAD ranges, meaning those forecasts have more significant deviations from actual values.
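MAD is the simplest of the five metrics; a sketch with the same illustrative series used for RMSE shows why it is always no larger than RMSE (squaring inflates large errors):

```python
# MAD: mean of the absolute differences between actual and forecast.
def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [120, 150, 170, 160]
forecast = [110, 155, 165, 172]
print(mad(actual, forecast))  # → 8.0 (vs. RMSE ≈ 8.57 for the same data)
```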

SMAPE


SMAPE is a forecasting error metric that calculates the percentage difference between actual and forecasted values, normalized by their average. It helps measure forecast accuracy while handling scale differences.

  • Most items have SMAPE values between 0%-6%, indicating a high forecast accuracy.

  • Fewer items fall into higher SMAPE ranges, meaning those forecasts have relatively more significant percentage errors.
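A sketch of the SMAPE formula as described above, normalizing each error by the average of the actual and forecasted values (illustrative numbers):

```python
# SMAPE: each absolute error is divided by the mean of |actual| and
# |forecast|, which keeps the metric symmetric and bounded.
def smape(actual, forecast):
    return 100 * sum(
        abs(a - f) / ((abs(a) + abs(f)) / 2) for a, f in zip(actual, forecast)
    ) / len(actual)

print(round(smape([100, 200, 300, 400], [110, 190, 306, 392]), 2))  # → 4.66
```

Note the result (≈4.66%) is close to, but not identical with, the MAPE of the same series (4.75%), because the denominator uses the average of both values rather than the actual alone.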


MASE

MASE is a forecasting accuracy metric that compares the absolute error of a model’s predictions to the error of a naïve baseline method (such as predicting each period from the previous one). It helps determine whether a forecasting model performs better than a simple benchmark.

  • Most items have MASE values between 0 and 1, meaning the forecast is better than a naïve method.

  • A few items have MASE values above 1, indicating that the model performs worse than the naïve benchmark for those cases.
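One common way to compute MASE scales the model's mean absolute error by the in-sample error of a one-step naïve forecast; the product's exact baseline is not documented here, so treat this as an illustrative sketch:

```python
# MASE: model MAE divided by the MAE of a naive forecast on the history
# (each historical value "predicted" by its predecessor). Values below 1
# mean the model beats the naive baseline.
def mase(actual, forecast, history):
    mae_model = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    scale = sum(
        abs(history[i] - history[i - 1]) for i in range(1, len(history))
    ) / (len(history) - 1)
    return mae_model / scale

history = [100, 120, 110, 130, 125, 140]   # illustrative demand history
print(round(mase([150, 145, 160], [148, 150, 156], history), 2))  # → 0.26
```

A value of 0.26 falls in the 0–1 range, i.e. the forecast outperforms the naïve method for this item.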

Table

The error metrics are in a tabular format, which allows planners to analyze and compare forecast accuracy across different items efficiently.

Scroll down the error metrics screen to access the error metrics details in the tabular format.


The error metrics table also contains advanced attributes. To view them, enable the toggle shown below.



After enabling the toggle, you can see the advanced attributes in the table below.


The advanced attributes include the following.

Forecast method

When the planner selects Best model (let the forecast engine choose) during forecast creation, the forecast engine selects the most suitable model for calculating the statistical forecast values; this attribute shows which model was chosen.

Standard deviation

Measures the variability in demand over time. Higher values indicate more fluctuation.

CV2 (Coefficient of Variation Squared)

A measure of demand variability, helping classify demand patterns. Lower values indicate more stable demand.

Average demand interval

The average time gap between non-zero demand occurrences.

Non-zero demand intervals

The number of periods with recorded demand; helpful in assessing intermittency.

Demand classification

Categorizes demand patterns (e.g., smooth, erratic, lumpy).
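The classification typically follows from the CV² and average-demand-interval (ADI) attributes above. The sketch below uses the widely cited Syntetos-Boylan cutoffs (ADI 1.32, CV² 0.49); the product's exact thresholds and formulas are not documented here, so treat these as illustrative assumptions:

```python
import statistics

# Classify a demand series from two attributes:
#   CV²: variance of non-zero demand sizes divided by their squared mean.
#   ADI: periods per non-zero demand occurrence.
# Cutoffs (1.32, 0.49) are the standard Syntetos-Boylan values, assumed here.
def classify_demand(demand):
    nonzero = [d for d in demand if d > 0]
    cv2 = statistics.pvariance(nonzero) / statistics.mean(nonzero) ** 2
    adi = len(demand) / len(nonzero)
    if adi < 1.32:
        label = "smooth" if cv2 < 0.49 else "erratic"
    else:
        label = "intermittent" if cv2 < 0.49 else "lumpy"
    return label, cv2, adi

label, cv2, adi = classify_demand([0, 10, 0, 0, 12, 0, 9, 0, 11, 0, 0, 10])
print(label, round(adi, 1))  # → intermittent 2.4
```

Here the series has stable sizes (low CV²) but frequent zero periods (ADI 2.4), so it classifies as intermittent rather than smooth.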

Trend strength

Measures the strength of an upward or downward trend in demand over time.

Seasonality strength

Measures the impact of seasonal patterns on demand.

Error

The Error column provides insights into whether a selected product has sufficient historical data. Some forecasting methods require at least six months of history, and if an item lacks enough data, this column will indicate the issue precisely for that item. This helps planners identify cases where the forecast method might not be suitable due to limited historical data.

