FORECASTING




Forecasting can be broadly considered as a method or a technique for estimating many future aspects of a business or other operation. There are numerous techniques that can be used to accomplish the goal of forecasting. For example, a retailing firm that has been in business for 25 years can forecast its volume of sales in the coming year based on its experience over the 25-year period—such a forecasting technique bases the future forecast on the past data.

While the term "forecasting" may appear to be rather technical, planning for the future is a critical aspect of managing any organization—business, nonprofit, or other. In fact, the long-term success of any organization is closely tied to how well the management of the organization is able to foresee its future and to develop appropriate strategies to deal with likely future scenarios. Intuition, good judgment, and an awareness of how well the economy is doing may give the manager of a business firm a rough idea (or "feeling") of what is likely to happen in the future. Nevertheless, it is not easy to convert a feeling about the future into a precise and useful number, such as next year's sales volume or the raw material cost per unit of output. Forecasting methods can help estimate many such future aspects of a business operation.

Suppose that a forecast expert has been asked to provide estimates of the sales volume for a particular product for the next four quarters. One can easily see that a number of other decisions will be affected by the forecasts or estimates of sales volumes provided by the forecaster. Clearly, production schedules, raw material purchasing plans, policies regarding inventories, and sales quotas will be affected by such forecasts. As a result, poor forecasts or estimates may lead to poor planning and thus result in increased costs to the business.

How should one go about preparing the quarterly sales volume forecasts? One will certainly want to review the actual sales data for the product in question for past periods. Suppose that the forecaster has access to actual sales data for each quarter over the 25-year period the firm has been in business. Using these historical data, the forecaster can identify the general level of sales. He or she can also determine whether there is a pattern or trend, such as an increase or decrease in sales volume over time. A further review of the data may reveal some type of seasonal pattern, such as peak sales occurring before a holiday. Thus, by reviewing historical data over time, the forecaster can often develop a good understanding of the previous pattern of sales. Understanding such a pattern can often lead to better forecasts of future sales of the product. In addition, if the forecaster is able to identify the factors that influence sales, historical data on these factors (or variables) can also be used to generate forecasts of future sales volumes.

FORECASTING METHODS

All forecasting methods can be divided into two broad categories: qualitative and quantitative. Many forecasting techniques use past or historical data in the form of time series. A time series is simply a set of observations measured at successive points in time or over successive periods of time. Forecasts essentially provide future values of the time series on a specific variable such as sales volume. Division of forecasting methods into qualitative and quantitative categories is based on the availability of historical time series data.

QUALITATIVE FORECASTING METHODS

Qualitative forecasting techniques generally employ the judgment of experts in the appropriate field to generate forecasts. A key advantage of these procedures is that they can be applied in situations where historical data are simply not available. Moreover, even when historical data are available, significant changes in environmental conditions affecting the relevant time series may make the use of past data irrelevant and questionable in forecasting future values of the time series. Consider, for example, that historical data on gasoline sales are available. If the government then implemented a gasoline rationing program, changing the way gasoline is sold, one would have to question the validity of a gasoline sales forecast based on the past data. Qualitative forecasting methods offer a way to generate forecasts in such cases. Three important qualitative forecasting methods are: the Delphi technique, scenario writing, and the subjective approach.

DELPHI TECHNIQUE.

In the Delphi technique, an attempt is made to develop forecasts through "group consensus." Usually, a panel of experts is asked to respond to a series of questionnaires. The experts, physically separated from and unknown to each other, are asked to respond to an initial questionnaire (a set of questions). Then, a second questionnaire is prepared incorporating information and opinions of the whole group. Each expert is asked to reconsider and to revise his or her initial response to the questions. This process is continued until some degree of consensus among experts is reached. It should be noted that the objective of the Delphi technique is not to produce a single answer at the end. Instead, it attempts to produce a relatively narrow spread of opinions—the range in which opinions of the majority of experts lie.

SCENARIO WRITING.

Under this approach, the forecaster starts with different sets of assumptions. For each set of assumptions, a likely scenario of the business outcome is charted out. Thus, the forecaster would be able to generate many different future scenarios (corresponding to the different sets of assumptions). The decision maker or businessperson is presented with the different scenarios, and has to decide which scenario is most likely to prevail.

SUBJECTIVE APPROACH.

The subjective approach allows individuals participating in the forecasting decision to arrive at a forecast based on their subjective feelings and ideas. This approach is based on the premise that a human mind can arrive at a decision based on factors that are often very difficult to quantify. "Brainstorming sessions" are frequently used as a way to develop new ideas or to solve complex problems. In loosely organized sessions, participants feel free from peer pressure and, more importantly, can express their views and ideas without fear of criticism. Many corporations in the United States have started to increasingly use the subjective approach.

QUANTITATIVE FORECASTING METHODS

Quantitative forecasting methods are used when historical data on the variables of interest are available; these methods are based on an analysis of historical data concerning the time series of the specific variable of interest and possibly other related time series. There are two major categories of quantitative forecasting methods. The first type bases the future forecast of a particular variable on that variable's own past behavior. Because this category of forecasting methods simply uses the time series of past data on the variable being forecasted, these techniques are called time series methods.

The second category of quantitative forecasting techniques also uses historical data. But in forecasting future values of a variable, the forecaster examines the cause-and-effect relationships of the variable with other relevant variables such as the level of consumer confidence, changes in consumers' disposable incomes, the interest rate at which consumers can finance their spending through borrowing, and the state of the economy represented by such variables as the unemployment rate. Thus, this category of forecasting techniques uses past time series on many relevant variables to produce the forecast for the variable of interest. Forecasting techniques falling under this category are called causal methods, as the basis of such forecasting is the cause-and-effect relationship between the variable forecasted and other time series selected to help in generating the forecasts.

TIME SERIES METHODS OF FORECASTING.

Before discussing time series methods, it is helpful to understand the behavior of time series in general terms. A time series is composed of four separate components: a trend component, a cyclical component, a seasonal component, and an irregular component. Combined, these four components determine the specific values of the time series.

In a time series, measurements are taken at successive points or over successive periods. The measurements may be taken every hour, day, week, month, or year, or at any other regular (or irregular) interval. While most time series data generally display some random fluctuations, the time series may still show gradual shifts to relatively higher or lower values over an extended period. The gradual shifting of the time series is often referred to by professional forecasters as the trend in the time series. A trend emerges due to one or more long-term factors, such as changes in population size, changes in the demographic characteristics of population, and changes in tastes and preferences of consumers. For example, manufacturers of automobiles in the United States may see that there are substantial variations in automobile sales from one month to the next. But, in reviewing auto sales over the past 15 to 20 years, the automobile manufacturers may discover a gradual increase in annual sales volume. In this case, the trend for auto sales is increasing over time. In another example, the trend may be decreasing over time. Professional forecasters often describe an increasing trend by an upward sloping straight line and a decreasing trend by a downward sloping straight line. Using a straight line to represent a trend, however, is a mere simplification—in many situations, nonlinear trends may more accurately represent the true trend in the time series.

Although a time series may often exhibit a trend over a long period, it may also display alternating sequences of points that lie above and below the trend line. Any recurring sequence of points above and below the trend line that last more than a year is considered to constitute the cyclical component of the time series—that is, these observations in the time series deviate from the trend due to cyclical fluctuations (fluctuations that repeat at intervals of more than one year). The time series of the aggregate output in the economy (called the real gross domestic product) provides a good example of a time series that displays cyclical behavior. While the trend line for gross domestic product (GDP) is upward sloping, the output growth displays a cyclical behavior around the trend line. This cyclical behavior of GDP has been dubbed business cycles by economists.

The seasonal component is similar to the cyclical component in that they both refer to some regular fluctuations in a time series. There is one key difference, however. While cyclical components of a time series are identified by analyzing multiyear movements in historical data, seasonal components capture the regular pattern of variability in the time series within one-year periods. Many economic variables display seasonal patterns. For example, manufacturers of swimming pools experience low sales in fall and winter months, but they witness peak sales of swimming pools during spring and summer months. Manufacturers of snow removal equipment, on the other hand, experience exactly the opposite yearly sales pattern. The component of the time series that captures the variability in the data due to seasonal fluctuations is called the seasonal component.

The irregular component of the time series represents the residual left in an observation of the time series once the effects of the trend, cyclical, and seasonal components are extracted. Trend, cyclical, and seasonal components are considered to account for systematic variations in the time series. The irregular component thus accounts for the random variability in the time series. These random variations are, in turn, caused by short-term, unanticipated, and nonrecurring factors that affect the time series. The irregular component, by its nature, cannot be predicted in advance.

TIME SERIES FORECASTING USING SMOOTHING METHODS.

Smoothing methods are appropriate when a time series displays no significant effects of trend, cyclical, or seasonal components (often called a stable time series). In such a case, the goal is to smooth out the irregular component of the time series by using an averaging process. Once the time series is smoothed, it is used to generate forecasts.

The moving averages method is probably the most widely used smoothing technique. In order to smooth the time series, this method uses the average of a number of adjoining data points or periods. This averaging process uses overlapping observations to generate averages. Suppose a forecaster wants to generate three-period moving averages. The forecaster would take the first three observations of the time series and calculate the average. Then, the forecaster would drop the first observation and calculate the average of the next three observations. This process would continue until three-period averages are calculated based on the data available from the entire time series. The term "moving" refers to the way averages are calculated—the forecaster moves up or down the time series to pick observations to calculate an average of a fixed number of observations. In the three-period example, the moving averages method would use the average of the most recent three observations of data in the time series as the forecast for the next period. This forecasted value for the next period, in conjunction with the last two observations of the historical time series, would yield an average that can be used as the forecast for the second period in the future.

The calculation of a three-period moving average can be illustrated as follows. Suppose a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were: 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). The three-period moving average in this case is 1.1 million cars (that is: [(1.3 + 0.90 + 1.1)/3 = 1.1]). Based on the three-period moving averages, the forecast may predict that 1.1 million American-made cars are most likely to be sold in the United States the next year.
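The three-period calculation above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are chosen for this example only):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    return sum(series[-window:]) / window

# Annual sales of American-made cars, in millions, listed oldest year first
# (the article reports them most-recent-first: 1.3, 0.9, 1.1).
sales = [1.1, 0.9, 1.3]
next_year = moving_average_forecast(sales, window=3)  # (1.1 + 0.9 + 1.3) / 3 = 1.1
```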

In calculating moving averages to generate forecasts, the forecaster may experiment with different-length moving averages. The forecaster will choose the length that yields the highest accuracy for the forecasts generated.

" It is important that forecasts generated not be too far from the actual future outcomes. In order to examine the accuracy of forecasts generated, forecasters generally devise a measure of the forecasting error (that is, the difference between the forecasted value for a period and the associated actual value of the variable of interest). Suppose retail sales volume for American-made automobiles in the United States is forecast to be 1.1 million cars for a given year, but only I million cars are actually sold that year. The forecast error in this case is equal 100,000 cars. In other words, the forecaster overestimated the sales volume for the year by 100,000. Of course, forecast errors will sometimes be positive, and at other times be negative. Thus, taking a simple average of forecast errors over time will not capture the true magnitude of forecast errors; large positive errors may simply cancel out large negative errors, giving a misleading impression about the accuracy of forecasts generated. As a result, forecasters commonly use the mean squares error to measure the forecast error. The mean squares error, or the MSE, is the average of the sum of squared forecasting errors. This measure, by taking the squares of forecasting errors, eliminates the chance of negative and positive errors canceling out.

In selecting the length of the moving averages, a forecaster can employ the MSE measure to determine the number of values to include in calculating the moving averages. The forecaster experiments with different lengths to generate moving averages and then calculates forecast errors (and the associated mean squared errors) for each length used. Then, the forecaster can pick the length that minimizes the mean squared error of the forecasts generated.
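The length-selection procedure can be sketched as follows (a simple illustration; the candidate lengths and the data are arbitrary):

```python
def one_step_forecasts(series, window):
    """Moving-average forecast for every period that has `window` prior values."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

def best_window(series, candidates):
    """Return the moving-average length whose one-step forecasts minimize the MSE."""
    def mse(window):
        forecasts = one_step_forecasts(series, window)
        actual = series[window:]
        return sum((a - f) ** 2 for a, f in zip(actual, forecasts)) / len(forecasts)
    return min(candidates, key=mse)

# On a steadily rising series, shorter averages lag the trend less,
# so the two-period length scores the lowest MSE here.
chosen = best_window([1, 2, 3, 4, 5, 6], candidates=(2, 3, 4))
```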

Weighted moving averages are a variant of moving averages. In the moving averages method, each observation of data receives the same weight. In the weighted moving averages method, different weights are assigned to the observations on data that are used in calculating the moving averages. Suppose, once again, that a forecaster wants to generate three-period moving averages. Under the weighted moving averages method, the three data points would receive different weights before the average is calculated. Generally, the most recent observation receives the maximum weight, with the weight assigned decreasing for older data values.

The calculation of a three-period weighted moving average can be illustrated as follows. Suppose, once again, that a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were: 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). With weights of 3/6, 2/6, and 1/6 assigned to the most recent, middle, and oldest observations, respectively, the weighted three-period moving average in this example equals 1.133 million cars (that is, [(3/6) x (1.3) + (2/6) x (0.90) + (1/6) x (1.1) = 1.133]). Based on the three-period weighted moving average, the forecast may predict that 1.133 million American-made cars are most likely to be sold in the United States in the next year. The accuracy of weighted moving averages forecasts is determined in a manner similar to that for simple moving averages.
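Using the same weighting scheme as above (the most recent observation receiving the largest weight), the calculation can be sketched in Python:

```python
def weighted_moving_average(series, weights):
    """Weighted average of the last len(weights) observations.
    `weights` are listed oldest-to-newest and should sum to 1."""
    recent = series[-len(weights):]
    return sum(w * x for w, x in zip(weights, recent))

# Oldest first: 1.1, then 0.9, then the most recent year, 1.3 (millions of cars).
# Weights 1/6, 2/6, 3/6 give the newest observation the heaviest weight.
sales = [1.1, 0.9, 1.3]
next_year_weighted = weighted_moving_average(sales, [1/6, 2/6, 3/6])  # about 1.133
```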

Exponential smoothing is somewhat more difficult mathematically. In essence, however, exponential smoothing also uses the weighted average concept, in the form of a weighted average of all past observations contained in the relevant time series, to generate forecasts for the next period. The term "exponential smoothing" comes from the fact that this method employs a weighting scheme for the historical values of the data that is exponential in nature. In ordinary terms, an exponential weighting scheme assigns the maximum weight to the most recent observation, and the weights decline in a systematic manner as older and older observations are included. The accuracy of forecasts generated using exponential smoothing is determined in a manner similar to that for the moving averages method.
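A minimal sketch of simple exponential smoothing, assuming a smoothing constant alpha chosen by the forecaster (the function name is illustrative):

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: returns the forecast for the next period.
    Each update gives weight `alpha` to the newest observation, so older
    observations end up with exponentially declining weights (1 - alpha)**k."""
    level = series[0]
    for observation in series[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level
```

With alpha = 0.5, for example, the last observation carries weight 0.5, the one before it 0.25, and so on, which is the exponentially declining scheme described above.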

TIME SERIES FORECASTING USING TREND PROJECTION.

This method uses the underlying long-term trend of a time series of data to forecast its future values. Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The time series data on U.S. auto sales can be plotted and examined visually. Most likely, the auto sales time series would display a gradual growth in the sales volume, despite the "up" and "down" movements from year to year. The trend may be linear (approximated by a straight line) or nonlinear (approximated by a curve). Most often, forecasters assume a linear trend; of course, if a linear trend is assumed when, in fact, a nonlinear trend is present, this misrepresentation can lead to grossly inaccurate forecasts. Assume that the time series on American-made auto sales is actually linear and thus can be represented by a straight line. Mathematical techniques are used to find the straight line that most accurately represents the time series on auto sales. This line relates sales to different points over time. If we further assume that the past trend will continue in the future, future values of the time series (forecasts) can be inferred from the straight line based on the past data. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors. One can continue to assume that the forecaster uses the mean squared error discussed earlier.
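The trend-projection idea can be sketched by fitting the least-squares line through a short hypothetical series and extending it one period forward (the data here are invented for illustration):

```python
def linear_trend_forecast(series, periods_ahead=1):
    """Fit y = a + b*t by least squares (t = 0, 1, 2, ...) and extend the line."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Sales (millions) growing by roughly 0.1 per year project to 1.5 next year.
next_year_trend = linear_trend_forecast([1.0, 1.1, 1.2, 1.3, 1.4])
```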

TIME SERIES FORECASTING USING TREND AND SEASONAL COMPONENTS.

This method is a variant of the trend projection method, making use of the seasonal component of a time series in addition to the trend component. This method removes the seasonal effect or the seasonal component from the time series. This step is often referred to as de-seasonalizing the time series.

Once a time series has been de-seasonalized it will have only a trend component. The trend projection method can then be employed to identify a straight line trend that represents the time series data well. Then, using this trend line, forecasts for future periods are generated. The final step under this method is to reincorporate the seasonal component of the time series (using what is known as the seasonal index) to adjust the forecasts based on trend alone. In this manner, the forecasts generated are composed of both the trend and seasonal components. One will normally expect these forecasts to be more accurate than those that are based purely on the trend projection.
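The de-seasonalize, project, and re-seasonalize sequence can be sketched with a simple multiplicative seasonal index (an illustration that assumes a stable seasonal pattern; the two-season data are invented):

```python
def seasonal_indices(series, period):
    """Average level of each season relative to the overall mean."""
    overall_mean = sum(series) / len(series)
    return [sum(series[s::period]) / len(series[s::period]) / overall_mean
            for s in range(period)]

def deseasonalize(series, indices):
    """Divide each observation by its season's index, leaving trend plus noise."""
    return [x / indices[i % len(indices)] for i, x in enumerate(series)]

# Two-season example: the "low" season runs at 2/3 of average, the "high" at 4/3.
series = [10, 20, 10, 20]
indices = seasonal_indices(series, period=2)   # [2/3, 4/3]
trend_only = deseasonalize(series, indices)    # every value becomes 15.0
# A trend forecast made on `trend_only` is then multiplied by the next
# period's index to restore the seasonal pattern.
```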

CAUSAL METHOD OF FORECASTING.

As mentioned earlier, causal methods use the cause-and-effect relationship between the variable whose future values are being forecasted and other related variables or factors. The most widely known causal method is regression analysis, a statistical technique used to develop a mathematical model showing how a set of variables is related. This mathematical relationship can be used to generate forecasts. In the terminology of regression analysis, the variable being forecasted is called the dependent or response variable. The variable or variables that help in forecasting the values of the dependent variable are called the independent or predictor variables. Regression analysis that employs one dependent variable and one independent variable and approximates the relationship between these two variables by a straight line is called simple linear regression. Regression analysis that uses two or more independent variables to forecast values of the dependent variable is called multiple regression analysis. Below, the forecasting technique using regression analysis for the simple linear regression case is briefly introduced.

Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The forecaster has also identified that the sale of automobiles is related to individuals' real disposable income (roughly speaking, income after income taxes are paid, adjusted for the inflation rate). The forecaster also has available the time series (for the last 25 years) on the real disposable income. The time series data on U.S. auto sales can be plotted against the time series data on real disposable income, so it can be examined visually. Most likely, the auto sales time series would display a gradual growth in sales volume as real disposable income increases, despite the occasional lack of consistency—that is, at times, auto sales may fall even when real disposable income rises. The relationship between the two variables (auto sales as the dependent variable and real disposable income as the independent variable) may be linear (approximated by a straight line) or nonlinear (approximated by a curve or a nonlinear line). Assume that the relationship between the time series on sales of American-made automobiles and real disposable income of consumers is actually linear and can thus be represented by a straight line.

A fairly rigorous mathematical technique is used to find the straight line that most accurately represents the relationship between the time series on auto sales and disposable income. The intuition behind the mathematical technique employed in arriving at the appropriate straight line is as follows. Imagine that the relationship between the two time series has been plotted on paper. The plot will consist of a scatter (or cloud) of points. Each point in the plot represents a pair of observations on auto sales and disposable income (that is, auto sales corresponding to the given level of the real disposable income in any year). The scatter of points (similar to the time series method discussed above) may have an upward or a downward drift. That is, the relationship between auto sales and real disposable income may be approximated by an upward or downward sloping straight line. In all likelihood, the regression analysis in the present example will yield an upward sloping straight line—as disposable income increases so does the volume of automobile sales.

Arriving at the most accurate straight line is the key. Presumably, one can draw many straight lines through the scatter of points in the plot. Not all of them, however, will equally represent the relationship—some will be closer to most points, and others will be way off from most points in the scatter. Regression analysis then employs a mathematical technique. Different straight lines are drawn through the data. Deviations of the actual values of the data points in the plot from the corresponding values indicated by the straight line chosen in any instance are examined. The sum of the squares of these deviations captures the essence of how close a straight line is to the data points. The line with the minimum sum of squared deviations (called the "least squares" regression line) is considered the line of the best fit.

Having identified the regression line, and assuming that the relationship based on the past data will continue, future values of the dependent variable (forecasts) can be inferred from the straight line. If the forecaster has an idea of what real disposable income may be in the coming year, a forecast for future auto sales can be generated. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors. One can continue to assume that the forecaster uses the mean squared error discussed earlier. In addition to forecast errors, regression analysis provides additional ways of analyzing how effectively the estimated regression line forecasts.
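The simple linear regression forecast can be sketched as follows (the income and sales figures are invented purely for illustration):

```python
def least_squares_line(x, y):
    """Intercept and slope of the least-squares regression line y = a + b*x."""
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
             / sum((xi - x_mean) ** 2 for xi in x))
    intercept = y_mean - slope * x_mean
    return intercept, slope

# Hypothetical history: real disposable income (trillions) vs. auto sales (millions).
income = [4.0, 4.5, 5.0, 5.5]
sales = [0.9, 1.0, 1.1, 1.2]
a, b = least_squares_line(income, sales)
# If income is expected to reach 6.0 trillion, read the forecast off the line:
sales_forecast = a + b * 6.0
```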

[Anandi P. Sahu, Ph.D.]

FURTHER READING:

Anderson, David R., Dennis J. Sweeney, and Thomas A. Williams. An Introduction to Management Science: Quantitative Approaches to Decision Making. 8th ed. Minneapolis/St. Paul: West Publishing, 1997.

——. Statistics for Business and Economics. 7th ed. Cincinnati: South-Western College Publishing, 1999.


