View forecast error metrics (View results)

Created by Shyam Sayana, Modified on Mon, 3 Nov at 8:16 AM by Shyam Sayana


The View Results section provides a detailed breakdown of the forecast error metrics, including MAD, MAPE, SMAPE, MASE, and RMSE. These metrics help evaluate the forecast's accuracy by comparing predicted values with actual data. 

Clicking the View Results button opens a side panel that displays detailed error metrics.

After clicking View Results, the side panel displays MAPE and RMSE as the default error metrics. A table showing detailed error metrics (MAD, MAPE, SMAPE, MASE, and RMSE) for the selected items also appears.

Click the View Details button to see all the error metrics and access the entire table.


Clicking View Details will navigate you to a detailed error metrics screen, where you can access a comprehensive breakdown of all the forecasting error metrics.

The error metrics are displayed in graphical and tabular formats, enabling a comprehensive analysis of forecast accuracy.

Charts/Visualization - Error metrics

MAPE (Mean Absolute Percentage Error)

The Mean Absolute Percentage Error (MAPE) chart shows the percentage difference between the forecasted and actual values. MAPE measures forecast accuracy as the average absolute percentage difference between predicted and actual values.

The X-axis represents different MAPE percentage ranges, and the Y-axis represents the item count.

  • Most items fall within the 0%–7% MAPE range, indicating low forecast error and high accuracy.

  • Some items fall in the 7%–14% range, indicating a moderate error.

  • Significantly fewer items fall in higher MAPE ranges (above 14%), suggesting that most forecasts are relatively accurate.
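The MAPE calculation behind this chart can be sketched as follows. This is a minimal illustration of the standard formula; the forecast engine's exact implementation (for example, how it treats periods with zero actual demand) may differ.

```python
# Minimal sketch of MAPE (assumed standard formula): the average absolute
# percentage difference between actual and forecasted values, in percent.
# Periods with zero actual demand are skipped here to avoid division by zero.
def mape(actual, forecast):
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

print(mape([100, 200, 400], [110, 190, 400]))  # → 5.0
```

An item whose MAPE lands at 5.0 would fall in the 0%–7% bucket of the chart above.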

RMSE (Root Mean Squared Error)

RMSE is a commonly used error metric that measures the average magnitude of forecasting errors. It calculates the square root of the mean squared difference between actual and forecast values. The x-axis represents the RMSE ranges, and the y-axis shows the number of items within each RMSE range.

  • Many items fall in the 0-111 RMSE range, indicating that most forecasts have relatively low errors.

  • A few items have RMSE values in higher ranges, suggesting higher forecast deviations for those specific items.
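As a reference, the standard RMSE formula described above can be sketched in a few lines. This is an assumed textbook implementation, not necessarily the engine's exact code.

```python
import math

# Minimal sketch of RMSE (assumed standard formula): the square root of the
# mean squared difference between actual and forecasted values.
def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

print(rmse([100, 200, 300], [110, 190, 300]))  # ≈ 8.165
```

Because errors are squared before averaging, RMSE penalizes large individual misses more heavily than MAD does, which is why a single bad period can push an item into a higher RMSE bucket.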


MAD (Mean Absolute Deviation)


MAD measures the average absolute difference between actual and forecasted values, providing a straightforward measure of forecast accuracy. The x-axis represents different MAD ranges (e.g., 0-86, 86-172, etc.), and the y-axis shows the count of items within each MAD range.

  • Most items have MAD values between 0-86, indicating that most forecast errors are relatively small.

  • A few items fall into higher MAD ranges, indicating greater deviations from actual values.
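The MAD formula described above is a plain average of absolute errors. A minimal sketch, assuming the standard definition:

```python
# Minimal sketch of MAD (assumed standard formula): the mean of the absolute
# differences between actual and forecasted values.
def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

print(mad([100, 200, 300], [90, 210, 300]))  # ≈ 6.67
```

Unlike MAPE, MAD is expressed in the same units as demand, so items with very different volumes are not directly comparable on MAD alone.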

SMAPE (Symmetric Mean Absolute Percentage Error)


SMAPE is a forecasting error metric that calculates the percentage difference between actual and forecasted values, normalized by their average. It helps measure forecast accuracy while handling scale differences.

  • Most items have SMAPE values between 0% and 6%, indicating high forecast accuracy.

  • Fewer items fall into higher SMAPE ranges, indicating those forecasts have relatively larger percentage errors.
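The SMAPE normalization by the average of actual and forecast can be sketched as below. Note that several SMAPE variants exist (they differ in the denominator), so this is an assumed common form rather than the engine's exact definition.

```python
# Minimal sketch of SMAPE (one common variant): absolute error divided by the
# average of the absolute actual and forecast values, reported in percent.
# Periods where both values are zero are skipped to avoid division by zero.
def smape(actual, forecast):
    terms = [abs(a - f) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast) if a or f]
    return 100 * sum(terms) / len(terms)

print(smape([100, 200], [110, 200]))  # ≈ 4.76
```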


MASE (Mean Absolute Scaled Error)

MASE is a forecasting accuracy metric that compares the absolute error of a model's predictions to the error of a naïve baseline method (such as a simple moving average). It helps determine whether a forecasting model performs better than a simple benchmark.

  • Most items have MASE values between 0 and 1, meaning the forecast is better than a naïve method.

  • A few items have MASE values above 1, indicating that the model performs worse than the naïve benchmark for those cases.
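The scaling idea behind MASE can be sketched as follows, using a one-step naïve forecast as the baseline (a common choice; the engine may use a different baseline, such as a simple moving average).

```python
# Minimal sketch of MASE (assumed common formulation): the forecast's mean
# absolute error, scaled by the in-sample mean absolute error of a one-step
# naive forecast over the historical series. Values below 1 beat the baseline.
def mase(actual, forecast, history):
    forecast_mad = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mad = sum(abs(history[i] - history[i - 1])
                    for i in range(1, len(history))) / (len(history) - 1)
    return forecast_mad / naive_mad

print(mase([120, 130], [118, 128], [100, 110, 105, 115]))  # ≈ 0.24
```

A result well below 1, as here, corresponds to the first bullet above: the forecast clearly outperforms the naïve method.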

Table View for Error metrics

The error metrics are presented in tabular format, enabling planners to analyze and compare forecast accuracy across items efficiently. You can scroll down the error metrics screen to access the error metrics details in a tabular format.


The error metrics table also contains advanced attributes. To view them, enable the advanced-attributes toggle on the error metrics screen.



After enabling the toggle, the advanced attributes appear as additional columns in the table.


The advanced attributes include the following.

Forecast methods

When the planner selects Best model (let the forecast engine choose) during forecast creation, the forecast engine selects the best model to calculate the forecast's statistical values.


Standard deviation:

Measures how much demand fluctuates from its average over time. A higher standard deviation indicates less predictable, more volatile demand, while a lower value suggests steadier demand. It helps planners understand the level of uncertainty in historical demand.


CV² (Coefficient of Variation Squared):

Represents demand variability normalized by the mean demand, making it easier to compare across products with different volumes. A lower CV² means a more stable and consistent demand, whereas a higher value indicates greater irregularity. It is often used in demand classification models.


Average demand interval:

Indicates the average time gap (in periods) between two consecutive non-zero demand occurrences. A higher value suggests that demand occurs less frequently, signaling intermittent or sporadic demand patterns. It helps identify slow-moving or non-continuous-demand items.


Non-zero demand intervals:

Represents the number of periods in which demand was greater than zero. A higher count indicates more regular demand, while a lower count indicates intermittent or lumpy demand. This metric helps planners assess demand continuity and forecast reliability.
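The four variability attributes above can be illustrated with a short calculation over a sample demand history. This is a sketch using assumed textbook definitions (population statistics, CV² over non-zero periods); the engine's exact formulas may differ.

```python
import statistics

# Sketch of the demand-variability attributes (assumed formulas) over a
# sample history with one demand value per period.
demand = [0, 12, 0, 0, 9, 15, 0, 11]

nonzero = [d for d in demand if d > 0]                 # non-zero demand periods
std_dev = statistics.pstdev(demand)                    # volatility of demand
cv2 = statistics.pvariance(nonzero) / statistics.mean(nonzero) ** 2  # CV²
adi = len(demand) / len(nonzero)                       # average demand interval

print(len(nonzero), adi)  # → 4 2.0
```

Here demand occurs in 4 of 8 periods, so the average demand interval is 2.0: on average, one period with demand is followed by one without.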


Demand classification:

Categorizes demand into types such as smooth, erratic, lumpy, or intermittent based on its variability and frequency. This classification helps planners apply the proper forecasting technique for each product. It improves accuracy by aligning models with actual demand behavior.
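One widely used classification scheme (Syntetos–Boylan) combines the average demand interval (ADI) and CV² with fixed cut-offs. The sketch below uses the common cut-offs of ADI = 1.32 and CV² = 0.49; the product's actual thresholds and scheme are an assumption here and may differ.

```python
# Sketch of a common demand-classification scheme (Syntetos-Boylan style
# cut-offs); the forecast engine's actual thresholds may differ.
def classify(adi, cv2):
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"

print(classify(1.1, 0.2))  # → smooth
print(classify(2.0, 0.8))  # → lumpy
```

Frequent, stable demand classifies as smooth; frequent but highly variable demand as erratic; infrequent but stable as intermittent; and infrequent, variable demand as lumpy.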


Trend strength:

Indicates whether demand is moving up or down over time. A strong positive trend implies growing demand, while a negative trend suggests declining interest or sales. Tracking trend strength helps in adjusting forecasts and inventory plans proactively.


Seasonality strength:

Measures how strongly seasonal patterns influence demand across time periods. High seasonality strength indicates that demand is concentrated in specific cycles (e.g., holidays, harvest seasons), while low seasonality indicates that demand is relatively uniform. This helps in fine-tuning seasonal forecast adjustments.
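Trend and seasonality strength are often computed from a decomposition of the series into trend, seasonal, and remainder components. The sketch below uses the strength measure from Hyndman & Athanasopoulos (Forecasting: Principles and Practice) as an assumed formulation; it presumes the decomposition has already been done.

```python
import statistics

# Sketch of one common strength measure (assumed formulation): strength is
# 1 minus the share of variance left in the remainder after removing the
# component, clamped at 0. Pass the trend component for trend strength, or
# the seasonal component for seasonality strength.
def strength(component, remainder):
    combined = [c + r for c, r in zip(component, remainder)]
    return max(0.0, 1 - statistics.pvariance(remainder) / statistics.pvariance(combined))

print(strength([1, 2, 3], [0, 0, 0]))  # → 1.0 (component explains everything)
print(strength([0, 0, 0], [1, 2, 3]))  # → 0.0 (component explains nothing)
```

Values near 1 indicate a dominant trend or seasonal pattern; values near 0 indicate the component contributes little.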

Error

The Error column provides insights into whether a selected product has sufficient historical data. Some forecasting methods require at least six months of history, and if an item lacks enough data, this column will indicate the issue precisely for that item. This helps planners identify cases where the forecast method might not be suitable due to limited historical data.

