For most of us, the tendency toward optimism is unavoidable. And it's unlikely that companies can, or would even want to, remove the organizational pressures that promote optimism. Still, optimism can, and should, be tempered. Simply understanding the sources of overoptimism can help planners challenge assumptions, bring in alternative perspectives, and in general take a balanced view of the future.
But there's also a more formal way to improve the reliability of forecasts. Companies can introduce into their planning processes an objective forecasting method that counteracts the personal and organizational sources of optimism. We'll begin our exploration of this approach with an anecdote that illustrates both the traditional mode of forecasting and the suggested alternative.
In 1976, one of us was involved in a project to develop a curriculum for a new subject area for high schools in Israel. The project was conducted by a small team of academics and teachers. When the team had been operating for about a year and had some significant achievements under its belt, its discussions turned to the question of how long the project would take. Everyone on the team was asked to write on a slip of paper the number of months that would be needed to finish the project, defined as having a complete report ready for submission to the Ministry of Education. The estimates ranged from eighteen to thirty months.
One of the team members, a distinguished expert in curriculum development, was then posed a challenge by another team member: "Surely, we're not the only team to have tried to develop a curriculum where none existed before. Try to recall as many such projects as you can. Think of them as they were in a stage comparable to ours at present. How long did it take them at that point to reach completion?" After a long silence, the curriculum expert said, with some discomfort, "First, I should say that not all the teams that I can think of, that were at a comparable stage, ever did complete their task. About 40 percent of them eventually gave up. Of the remaining, I cannot think of any that completed their task in less than seven years, nor of any that took more than ten." He was then asked if he had reason to believe that the present team was more skilled in curriculum development than the earlier ones had been. "No," he replied, "I cannot think of any relevant factor that distinguishes us favorably from the teams I have been thinking about. Indeed, my impression is that we are slightly below average in terms of resources and potential." The wise decision at this point would probably have been for the team to disband. Instead, the members ignored the pessimistic information and proceeded with the project. They finally completed the initiative eight years later, and their efforts went largely for naught: the resulting curriculum was rarely used.
In this example, the curriculum expert made two forecasts for the same problem and arrived at very different answers. We call these two distinct modes of forecasting the inside view and the outside view. The inside view is the one that the expert and all the other team members spontaneously adopted. They made forecasts by focusing tightly on the case at hand: considering its objective, the resources they brought to it, and the obstacles to its completion; constructing in their minds scenarios of their coming progress; and extrapolating current trends into the future. Not surprisingly, the resulting forecasts, even the most conservative ones, were exceedingly optimistic.
The outside view, also known as reference-class forecasting, is the one that the curriculum expert was encouraged to adopt. It completely ignored the details of the project at hand, and it involved no attempt at forecasting the events that would influence the project's future course. Instead, it examined the experiences of a class of similar projects, laid out a rough distribution of outcomes for this reference class, and then positioned the current project in that distribution. The resulting forecast, as it turned out, was much more accurate.
How to take the outside view
Making a forecast using the outside view requires planners to identify a reference class of analogous past initiatives, determine the distribution of outcomes for those initiatives, and place the project at hand at an appropriate point along that distribution. This effort is best organized into five steps:
1. Select a reference class. Identifying the right reference class involves both art and science. You usually have to weigh similarities and differences on many variables and determine which are the most meaningful in judging how your own initiative will play out. Sometimes that's easy. If you're a studio executive trying to forecast sales of a new film, you'll formulate a reference class based on recent films in the same genre, starring similar actors, with comparable budgets, and so on. In other cases, it's much trickier. If you're a manager at a chemical company that is considering building an olefin plant incorporating a new processing technology, you may instinctively think that your reference class would include olefin plants now in operation. But you may actually get better results by looking at other chemical plants built with new processing technologies. The plant's outcome, in other words, may be more influenced by the newness of its technology than by what it produces. In forecasting an outcome in a competitive situation, such as the market share for a new venture, you need to consider industrial structure and market factors in designing a reference class. The key is to choose a class that is broad enough to be statistically meaningful but narrow enough to be truly comparable to the project at hand.
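To make the mechanics concrete, here is a minimal sketch in Python of how a studio analyst might assemble a reference class by filtering a library of past films on genre and budget. The film records, the PastFilm fields, and the budget tolerance are all invented for illustration; in practice, which variables to filter on is exactly the judgment call described above.

```python
from dataclasses import dataclass

@dataclass
class PastFilm:
    title: str          # hypothetical title
    genre: str
    budget_musd: float  # production budget, in $M
    sales_musd: float   # ticket sales, in $M

# Invented records standing in for a studio's historical data.
history = [
    PastFilm("A", "comedy", 30, 55),
    PastFilm("B", "comedy", 45, 38),
    PastFilm("C", "action", 120, 210),
    PastFilm("D", "comedy", 25, 12),
    PastFilm("E", "drama", 20, 9),
]

def reference_class(films, genre, budget_musd, budget_tolerance=0.5):
    """Keep films of the same genre whose budgets fall within a band
    around the new project's budget. Loosen the band if the class
    becomes too small to be statistically meaningful."""
    lo = budget_musd * (1 - budget_tolerance)
    hi = budget_musd * (1 + budget_tolerance)
    return [f for f in films if f.genre == genre and lo <= f.budget_musd <= hi]

comparables = reference_class(history, genre="comedy", budget_musd=35)
print([f.title for f in comparables])  # ['A', 'B', 'D']
```

Widening or narrowing budget_tolerance is the code-level version of the trade-off between a class broad enough to be statistically meaningful and one narrow enough to be truly comparable.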
2. Assess the distribution of outcomes. Once the reference class is chosen, you have to document the outcomes of the prior projects and arrange them as a distribution, showing the extremes, the median, and any clusters. Sometimes you won't be able to precisely document the outcomes of every member of the class. But you can still arrive at a rough distribution by calculating the average outcome as well as a measure of variability. In the film example, for instance, you may find that the reference-class movies sold $40 million worth of tickets on average, but that 10 percent sold less than $2 million worth of tickets and 5 percent sold more than $120 million worth.
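A rough distribution can be summarized with nothing more than the standard library. The sketch below assumes a small invented list of ticket-sales outcomes ($M) for the reference-class films; statistics.quantiles (Python 3.8+) supplies the decile cut points.

```python
import statistics

# Invented ticket-sales outcomes ($M) for the films in the reference class.
sales = [2, 5, 8, 12, 14, 18, 22, 27, 31, 36, 40, 55, 70, 95, 120]

mean_sales = statistics.mean(sales)          # average outcome
stdev_sales = statistics.stdev(sales)        # a simple measure of variability
deciles = statistics.quantiles(sales, n=10)  # cut points between the deciles

print(f"mean: ${mean_sales:.0f}M, std dev: ${stdev_sales:.0f}M")
print(f"bottom decile below ${deciles[0]:.0f}M, top decile above ${deciles[-1]:.0f}M")
```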
3. Make an intuitive prediction of your project's position in the distribution. Based on your own understanding of the project at hand and how it compares with the projects in the reference class, predict where it would fall along the distribution. Because your intuitive estimate will likely be biased, the final two steps are intended to adjust the estimate in order to arrive at a more accurate forecast.
4. Assess the reliability of your prediction. Some events are easier to foresee than others. A meteorologist's forecast of temperatures two days from now, for example, will be more reliable than a sportscaster's prediction of the score of next year's Super Bowl. This step is intended to gauge the reliability of the forecast you made in Step 3. The goal is to estimate the correlation between the forecast and the actual outcome, expressed as a coefficient between 0 and 1, where 0 indicates no correlation and 1 indicates complete correlation. In the best case, information will be available on how well your past predictions matched the actual outcomes. You can then estimate the correlation based on historical precedent. In the absence of such information, assessments of predictability become more subjective. You may, for instance, be able to arrive at an estimate of predictability based on how the situation at hand compares with other forecasting situations. To return to the movie example, say that you are fairly confident that your ability to predict the sales of films exceeds the ability of sportscasters to predict point spreads in football games but is not as good as the ability of weather forecasters to predict temperatures two days out. Through a diligent statistical analysis, you could construct a rough scale of predictability based on computed correlations between predictions and outcomes for football scores and temperatures. You can then estimate where your ability to predict film sales lies on this scale. When the calculations are complex, it may help to bring in a skilled statistician.
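If records of your past forecasts and the eventual outcomes exist, the correlation in Step 4 can be computed directly. The sketch below uses invented forecast-and-outcome pairs and Python's statistics.correlation (available from Python 3.10); without such records, the subjective scaling described above is the fallback.

```python
import statistics

# Invented pairs: past forecasts and the actual outcomes ($M of ticket sales)
# for projects the forecaster has predicted before.
forecast = [60, 25, 80, 40, 15, 100, 55]
actual   = [45, 20, 50, 42, 8, 70, 38]

# Pearson correlation between forecast and outcome: 0 means the forecasts
# carry no information, 1 means they track outcomes perfectly.
r = statistics.correlation(forecast, actual)
print(f"estimated reliability of past forecasts: r = {r:.2f}")
```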
5. Correct the intuitive estimate. Due to bias, the intuitive estimate made in Step 3 will likely be optimistic, deviating too far from the average outcome of the reference class. In this final step, you adjust the estimate toward the average based on your analysis of predictability in Step 4. The less reliable the prediction, the more the estimate needs to be regressed toward the mean. Suppose that your intuitive prediction of a film's sales is $95 million and that, on average, films in the reference class do $40 million worth of business. Suppose further that you have estimated the correlation coefficient to be 0.6. The regressed estimate of ticket sales would be:
$95M + [(1 - 0.6) ($40M - $95M)] = $73M
As you see, the adjustment for optimism will often be substantial, particularly in highly uncertain situations where predictions are unreliable.
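The Step 5 correction can be written as a one-line function: the regressed estimate moves from the intuitive forecast toward the reference-class mean, and the lower the reliability r, the further it moves (equivalently, estimate = mean + r × (intuitive − mean)). The sketch below is ours, not part of the original method's published materials, and uses the numbers from the film example.

```python
def regress_toward_mean(intuitive, class_mean, reliability):
    """Shrink an intuitive forecast toward the reference-class mean.

    reliability is the correlation (0 to 1) between past forecasts and
    actual outcomes: the lower it is, the further the estimate is
    pulled toward the mean."""
    return class_mean + reliability * (intuitive - class_mean)

# Film example from the text: intuitive estimate $95M, class mean $40M,
# estimated correlation 0.6.
print(regress_toward_mean(95, 40, 0.6))  # 73.0
```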