Can Managers Afford to Ignore Predictions in Planning?
There is healthy skepticism about the reliability of predictions as a basis for planning. Donald Kortalon, commenting on this month's column, cited a number of experts whose predictions have been badly off the mark, some with serious consequences for firms and for society in general. Annemarie Scholberlev quoted John Kenneth Galbraith, who reminded us of one cause of poor predictions: "… those employed or self-employed who tell of the future … do not know and normally do not know that they do not know." Even predictions based on an analysis of trends came in for criticism. Beatrice Haurenson, commenting on trend analysis, asked: "What did the analysis of 'Trends' pre-2007 tell us?" Tom Dolembo suggested another reason why predictions can mislead. In his words, "Predictions, even from highly reliable sources, are generally not believed unless they fit the company agenda."
Others maintained that planning without prediction is impossible. After all, a plan is a prediction. As Vishu put it, "Managers should … base plans on predictions. It forces them to be disciplined. They should not however be slaves to the plan or the prediction they are following and must ask how firm-wide competencies can be developed to handle changes in predictions." And there are situations in which predictions can be very helpful. Peter Everett commented: " … in contexts or categories that we have invested in significantly to understand deeply … I'll trust our predictions. And I'll trust the prediction more if it's (from) someone who has an unbiased 3rd party interest in the outcome." Paul Lepley added that "Before relying on a prediction, it's wise to know what information and assumptions went into the process, and to know how susceptible the conclusion is to small changes." Referring to Royal Dutch Shell's early scenario planning efforts, Hugh Quick volunteered: "I was involved in Shell's Scenarios. One step in preparing them was to identify fairly reliable predictions from the guesses."
Others identified people as the weak point in the process. Adam Hartung commented that "… everyone should be evaluating trends and making forecasts they use to guide planning … That so few companies do this is a testament to your question …" Celso Maia added: "Predictions will improve … but wise application in most … cases still remains under XIII century style." As Arpit Goyal put it: "The whole system is centralized on two notions, 'uncertainty' and 'people's confidence.' I think the main problem is not with the system but with ability of the people handling this system."
Alternatives to the use of predictions in the customary planning process were not ignored. Jean-Christophe Khuries suggested one when he asked: "How do you manage a large corporation without … predictions? … Better, once you have done it for a while … The technique of continuous forecasting was there, we just needed to adopt and tailor it. … continuous forecasting/budgeting/planning requires more effort to think about the future than the former once-a-year event."
The ultimate dilemma faced by senior managers responsible for the use of prediction in planning was posed by Antonio Sarlandes. Speaking as a CEO, he said: "Should the predictions come out right, and that does happen, and I choose to ignore them, I will be blamed by the shareholders, 'How could you!' So, no, managers should not bother (listening to predictions), but yes, they may have to bother." The dilemma he poses requires that we consider this question: Can managers afford to ignore predictions in their planning efforts? What do you think?
We're uncertain these days about many things—the future of the European economy and the EU itself, the "fiscal cliff" in the US, and the politics of the Middle East among them. Many corporate and personal plans are on hold. These may be uncertain times, but we've always been certain about two things: (1) predictions are never accurate, and (2) plans are obsolete the moment they are made.
Three books, two just out, provide provocative perspectives on the challenge. The first, Nate Silver's The Signal and the Noise: Why Most Predictions Fail but Some Don't, argues that methods for more accurate predictions are on the way. The second, Nassim Nicholas Taleb's The Black Swan: The Impact of the Highly Improbable, essentially argues that the most important things are not predictable anyway, so why obsess about predictions?
Silver's predictions of specific outcomes in the 2008 and 2012 US elections were among the most accurate. He argues that predictions fail because they are too often based on the wrong things, what he calls the "noise" (short-term stock market action) rather than the "signal" embedded in the noise (long-term secular trends in the market). In his view, the problem may grow as the era of Big Data produces more and more noise. Predictors, he says, too often mistake self-confidence for competence in data analysis. He argues for predictions that are: (1) expressed in terms of probabilities (as in Bayesian statistical methods and weather forecasts), (2) revisited and revised as frequently as necessary, and (3) based on a consensus among predictors.
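Silver's first two prescriptions, forecasts expressed as probabilities and revised as evidence arrives, can be made concrete with a minimal Bayesian-update sketch. The event, prior, and likelihoods below are invented for illustration; they are not drawn from Silver's book.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise the probability of an event after observing one piece of evidence.

    prior: current probability that the event will occur
    p_evidence_if_true: chance of seeing this evidence if the event occurs
    p_evidence_if_false: chance of seeing it if the event does not occur
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical forecast: start at a 30% chance of, say, a demand surge,
# then revise as each new (invented) signal comes in.
forecast = 0.30
for signal_if_true, signal_if_false in [(0.8, 0.4), (0.7, 0.3)]:
    forecast = bayes_update(forecast, signal_if_true, signal_if_false)
print(round(forecast, 3))  # prints 0.667
```

The point of the exercise is less the arithmetic than the discipline: the forecast is never a yes-or-no claim, and each new signal forces an explicit revision.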
Taleb, in The Black Swan, worried that cataclysmic events (such as Hurricane Sandy) are next to impossible to predict and too costly to prepare for, given the small probability that any one of them will ever occur. Human beings by and large do not have knowledge sufficiently general to comprehend signals that something big might happen, even though they think they do. We therefore continue to concentrate on matters irrelevant to Black Swans ("noise"?) and also fail to develop the humility that Silver values. Although Taleb makes recommendations for how to correct these shortcomings, one gets the idea that he really doesn't believe we can follow them.
Organizations have sought workarounds for our inability to predict. Scenario planning, for example, was developed within Royal Dutch Shell and has since been adopted by many others. Rather than producing one plan, the process produces at least three: one based on a "best case," one on a "worst case," and one representing a "best guess." One could argue that the process itself has a degree of humility built into it. Fashion retailers speed up the replenishment process to substitute fast responses to customer demand for inaccurate predictions.
Taleb himself, in a new book, Antifragile: Things That Gain from Disorder, argues that organizations and individuals should build antifragile mechanisms, "anything that has more upside than downside from random events," into strategies rather than trying to predict the unpredictable. Predictions often end up as a basis for strategic planning, which he dismisses as "superstitious babble" often carried out in large organizations whose very size exposes them to "disproportionate fragility to Black Swans." Instead, he favors so-called "barbell strategies" in which a portion of the business portfolio is created to guarantee survival in the event of a Black Swan so that the more innovative and entrepreneurial remainder of the portfolio can be designed to benefit from it (at a small risk of total loss). And if predictions have to be made, he wants them made by people with "skin in the game."
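The asymmetry behind the barbell idea can be shown with a rough numeric sketch. The allocation split and the payoffs under each scenario below are entirely invented for illustration, not Taleb's figures.

```python
# Hypothetical barbell: a large "survival" portion plus a small portion
# of entrepreneurial bets that can be lost entirely or pay off many times over.
SAFE, RISKY = 0.85, 0.15

def barbell(outcome):
    """Portfolio value per $1 under an invented outcome scenario."""
    if outcome == "black_swan":
        return SAFE * 1.0 + RISKY * 0.0   # risky bets wiped out; firm survives
    if outcome == "big_upside":
        return SAFE * 1.0 + RISKY * 10.0  # one bet pays off tenfold
    return SAFE * 1.0 + RISKY * 1.0       # quiet year: roughly flat

def all_in(outcome):
    """Fully exposed portfolio under the same invented scenarios."""
    return {"black_swan": 0.0, "big_upside": 10.0}.get(outcome, 1.0)

print(round(barbell("black_swan"), 2), all_in("black_swan"))  # 0.85 vs 0.0
print(round(barbell("big_upside"), 2))                        # 2.35
```

On these made-up numbers the barbell caps the downside at 15 cents on the dollar while still capturing a meaningful share of the upside, which is the "more upside than downside from random events" property in miniature.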
In the future, will predictions improve as we develop new methods to sort out the noise from the signal? Or will the growth of Big Data outpace the development of methods to detect the signal? Given our inability to predict, should managers bother to base plans on predictions? What do you think?
To Read More:
Nate Silver, The Signal and the Noise: Why Most Predictions Fail but Some Don't (New York: The Penguin Press, 2012)
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007)
Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012)