Forecast vs. Actual Reporting in Forecast Pro TRAC


In this installment of Tips & Tricks we detail one of the many reports available in Forecast Pro TRAC: the Archive vs. Actual report. This important report allows you to track your forecasts’ accuracy by comparing them to what actually occurred. Tracking accuracy provides several benefits, including information that enables you to improve forecast performance, insight into expected performance, and the ability to spot problems early on.

Why Track Forecast Accuracy?

There are many reasons to track forecast accuracy. Some key reasons include:

  • Improving your forecasting process requires the ability to track accuracy. Forecasting should be viewed as a continuous improvement process. Your forecasting team should be constantly striving to improve the forecasting process and forecast accuracy, and doing so requires knowing what is working and what is not. This includes assessing your statistically-based forecasts as well as your judgmental forecasts.
  • Tracking accuracy provides insight into expected performance. A forecast is more than a number. To use a forecast effectively you need an understanding of its expected accuracy. This can help drive inventory, service level, and allocation decisions and policies.
  • Tracking accuracy allows you to benchmark your forecasts. If you are lucky enough to be in an industry with published statistics on forecast accuracy, comparing your accuracy to these benchmarks provides insight into your forecasting effectiveness. If industry benchmarks are not available (usually the case), periodically benchmarking your current forecast accuracy against your earlier forecast accuracy allows you to measure your improvement.
  • Monitoring forecast accuracy allows you to spot problems early. An abrupt unexpected change in forecast accuracy is often the result of some underlying event. For example, if unbeknownst to you, a key customer decides to carry a competitor’s product, your first indication might be an unusually large forecast error. Routinely monitoring forecast errors allows you to spot, investigate and respond to these changes early on—before they turn into bigger problems.

Within-Sample Error vs. Out-of-Sample Error

When discussing forecast error, an important distinction needs to be drawn between two commonly used and reported types of forecast error: “Within-Sample” error and “Out-of-Sample” error.

Within-Sample Error is a measure of how well a model fits or replicates a set of known historic data. In a sense, within-sample error is a measure of how well a model performs as a descriptor of the past. Within-sample error is backward-looking.

Out-of-Sample Error is a measure of how well a forecast model (or process) performs as a predictor of the future. With out-of-sample error, a forecast made for some point in the future is compared against what actually happens. Out-of-sample error is forward-looking.

In the above screenshot the green line represents a historical data series (monthly data starting in June 2011 and running through June 2015). The red line shown on the left side of the graph represents the forecast model. You can visually inspect the “fit” of the model to the history on the graph and also report on various “within-sample” error measures (e.g., MAPE, MAD, R-square).

The red line on the right side of the graph represents the forecasts for the future. In order to measure out-of-sample error (how well the model performs as a predictor) each forecast needs to be compared against what actually happens. For example, to measure the out-of-sample error for the December forecast, we need to store or archive that forecast until time passes and we know the actual December value.
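To make the distinction concrete, the sketch below fits a simple model to known history, computes a within-sample MAPE from the fitted values, archives the forecasts for the next year, and scores them out-of-sample only after the corresponding actuals arrive. It is a minimal illustration of the bookkeeping involved, not Forecast Pro's internal calculation; the model choice (a seasonal-naive forecast) and all data values are assumptions made up for the example.

```python
# Minimal sketch of within-sample vs. out-of-sample error, assuming a
# simple seasonal-naive model; Forecast Pro's models and error
# calculations are more sophisticated.

def mape(actuals, forecasts):
    """Mean absolute percent error over paired actuals and forecasts."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(errors) / len(errors)

# Known monthly history (made-up illustrative values; two years of data).
history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
           115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]
season = 12

# Within-sample: the model's fitted values are compared to the same
# history the model was built from (backward-looking).
fitted = history[:-season]            # seasonal-naive "fit": last year's value
within_sample_mape = mape(history[season:], fitted)

# Out-of-sample: forecasts for future periods are archived now...
archived_forecast = history[-season:]  # seasonal-naive forecast for next year

# ...and only scored later, once the actuals for those periods are known
# (forward-looking). Here we pretend a year has passed.
new_actuals = [120, 130, 146, 142, 131, 156, 178, 180, 166, 140, 120, 147]
out_of_sample_mape = mape(new_actuals, archived_forecast)

print(f"Within-sample MAPE: {within_sample_mape:.1f}%")
print(f"Out-of-sample MAPE: {out_of_sample_mape:.1f}%")
```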

Creating the Archive vs. Actual Report

Forecast Pro TRAC stores all of the forecasts made over time in the Forecast Pro TRAC forecast archive. The archive stores both the original statistical forecast as well as the final (i.e., adjusted) forecast. The results are displayed in the Archive vs. Actual Report.
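Conceptually, each archive entry records which item was forecasted, when the forecast was made (the forecast origin), which future period it was for, and both the statistical and final forecast values. The sketch below shows one way such a record could be represented; the class and field names are purely illustrative and are not Forecast Pro TRAC's internal storage format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical archive record, for illustration only.
@dataclass
class ArchiveEntry:
    item: str                     # forecast item (e.g., SKU or group)
    origin: date                  # when the forecast was made (forecast origin)
    target_period: date           # the future period being forecasted
    statistical_forecast: float   # the original statistical forecast
    final_forecast: float         # the final (i.e., adjusted) forecast

# Example: a forecast made at the end of June 2015 for December 2015.
entry = ArchiveEntry(
    item="Item A",
    origin=date(2015, 6, 30),
    target_period=date(2015, 12, 31),
    statistical_forecast=1520.0,
    final_forecast=1600.0,
)
```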

In order to create an Archive vs. Actual report you need to be working with a project which includes archived forecasts. An easy indication that you are working with such a project is that the View tracking report and graph icon on the toolbar is active (not grayed out) and looks like the example below. The View tracking report and graph icon is used to view the Waterfall Report (Forecast Accuracy Measurement & Tracking). The Archive vs. Actual report is one of the exception reports available. To access the Exception report window, click on the toolbar icon shown below.

Right-clicking in the Exception report window and then choosing Exception report settings… from the context menu displays the dialog box shown below. There are six tabs, one for each of the available exception reports. In the screenshot below, the Archive vs. Actual tab is selected.

The Archive vs. Actual report can be customized in a number of ways. For instance:

  • It can report on only the most recent historical period or on multiple historic periods
  • It can report on all forecasts or report by exception, reporting only those instances where error exceeds pre-defined limits (global or item-specific)
  • It can report based on position in hierarchy (all items, item-level or group-level)
  • It can report on customized subsets of the hierarchy based on defined Hot List(s)

Some Specifics on Designing an Archive vs. Actual Report

The Historical periods to consider section allows you to select the number of historic periods to monitor. Setting “Periods to monitor” to 1 will monitor the most recent historic value only. Setting “Periods to monitor” to 2 will monitor the most recent historic value and the prior period, etc.

The Allowable deviation from history section allows you to set the sensitivity of the exception thresholds. Item-level thresholds allow you to assign different sensitivities to different items. The item-level thresholds must be defined in the project’s secondary file. Global thresholds use the same thresholds for all items.

The Comparison Basis section allows you to set the thresholds on a percent or unit basis and to specify a lead time or archive period to use. The “Lead time” and “Archive period” settings are only relevant if you are monitoring more than one historic point. They allow you to compare each historic point being monitored to either the corresponding forecast for a specific lead time or to forecasts made at a specific archive period (i.e., forecast origin).

The Show all option generates an entry for every specified item rather than only showing the items that fall outside the defined thresholds.
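Pulling these settings together, the sketch below shows one way the selection and exception logic could work in principle, building on the ArchiveEntry records sketched earlier. It monitors the most recent N actuals, selects the archived forecast for each of them either by lead time or by archive period, and flags items whose percent error exceeds a global threshold. This is a simplified, hypothetical reconstruction for illustration; the function and parameter names are assumptions, and Forecast Pro TRAC's actual implementation, options, and lead-time conventions may differ.

```python
from datetime import date
from typing import Optional

def months_between(earlier: date, later: date) -> int:
    """Number of whole months from `earlier` to `later`."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def archive_vs_actual(entries, actuals, periods_to_monitor=1,
                      threshold_pct=20.0, lead_time: Optional[int] = None,
                      archive_origin: Optional[date] = None, show_all=False):
    """Hypothetical Archive vs. Actual check with a global percent threshold.

    entries -- list of ArchiveEntry records (see earlier sketch)
    actuals -- dict mapping (item, period) -> actual value
    """
    # Historical periods to consider: the most recent N actual periods.
    periods = sorted({period for (_, period) in actuals})[-periods_to_monitor:]

    rows = []
    for (item, period), actual in actuals.items():
        if period not in periods:
            continue
        # Comparison basis: pick the archived forecast for this item/period,
        # either by lead time (months from origin to target) or by archive period.
        candidates = [e for e in entries
                      if e.item == item and e.target_period == period]
        if lead_time is not None:
            candidates = [e for e in candidates
                          if months_between(e.origin, period) == lead_time]
        if archive_origin is not None:
            candidates = [e for e in candidates if e.origin == archive_origin]
        if not candidates:
            continue
        forecast = candidates[0].final_forecast
        pct_error = (100.0 * abs(actual - forecast) / abs(actual)
                     if actual else float("inf"))
        # Report by exception, unless Show all is requested.
        if show_all or pct_error > threshold_pct:
            rows.append((item, period, forecast, actual, round(pct_error, 1)))
    return rows
```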

A Sample Report

The screenshot above shows the Exception Report Settings dialog box with the resulting Archive vs. Actual report in the background. Based on the chosen settings the report has been designed to:

  • Monitor against last month only — the most recent actual (Periods to monitor = 1).
  • Monitor the forecasts made three months ago for last month (Lead time = 2), as illustrated in the sketch below.
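Under the lead-time convention assumed in the earlier sketch (the number of months from the forecast origin to the target period), those settings correspond to a call like the following; the dataset is made up for illustration.

```python
from datetime import date

# Tiny illustrative dataset: one item, the forecast made three months ago
# (origin) for last month (target), plus last month's actual.
entries = [ArchiveEntry(item="Item A", origin=date(2015, 4, 30),
                        target_period=date(2015, 6, 30),
                        statistical_forecast=950.0, final_forecast=1000.0)]
actuals = {("Item A", date(2015, 6, 30)): 780.0}

# Periods to monitor = 1, Lead time = 2, global 20% threshold.
for row in archive_vs_actual(entries, actuals,
                             periods_to_monitor=1, lead_time=2,
                             threshold_pct=20.0):
    print(row)   # flagged: the ~28% error exceeds the 20% threshold
```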

All of the exception reports in Forecast Pro TRAC can also be filtered and/or sorted just as in an Excel spreadsheet.

Summary

Real-time Forecast vs. Actual reporting, which centers on out-of-sample error, plays a central role in improving your forecast process, provides insight into expected performance, and lets you spot problems early on.

To schedule a live WebEx demonstration of Forecast Pro TRAC click here.
