Summing Up
Can Managers Afford to Ignore Predictions in Planning?
There is a healthy skepticism when it comes to the reliability of predictions as a basis for planning. Donald Kortalon, commenting on this month's column, cites a number of experts whose predictions have been badly off the mark, some with serious consequences to firms and society in general. Annemarie Scholberlev quoted John Kenneth Galbraith, who reminded us of a cause of poor predictions when he said "… those employed or self-employed who tell of the future … do not know and normally do not know that they do not know." Even predictions based on an analysis of trends can have a bad name. Beatrice Haurenson, commenting on trend analysis, said: "What did the analysis of 'Trends' pre-2007 tell us?" Tom Dolembo suggested another reason why predictions can mislead. In his words, "Predictions, even from highly reliable sources, are generally not believed unless they fit the company agenda."
Others maintained that planning without prediction is impossible. After all, a plan is a prediction. As Vishu put it, "Managers should … base plans on predictions. It forces them to be disciplined. They should not however be slaves to the plan or the prediction they are following and must ask how firm-wide competencies can be developed to handle changes in predictions." And there are situations in which predictions can be very helpful. Peter Everett commented: " … in contexts or categories that we have invested in significantly to understand deeply … I'll trust our predictions. And I'll trust the prediction more if it's (from) someone who has an unbiased 3rd party interest in the outcome." Paul Lepley added that "Before relying on a prediction, it's wise to know what information and assumptions went into the process, and to know how susceptible the conclusion is to small changes." Referring to an example of Royal Dutch Shell's early scenario planning efforts, Hugh Quick volunteered that "I was involved in Shell's Scenarios. One step in preparing them was to identify fairly reliable predictions from the guesses."
Others identified people as the weak point in the process. Adam Hartung commented that "… everyone should be evaluating trends and making forecasts they use to guide planning … That so few companies do this is a testament to your question …" Celso Maia added: "Predictions will improve … but wise application in most … cases still remains under XIII century style." As Arpit Goyal put it: "The whole system is centralized on two notions, 'uncertainty' and 'people's confidence.' I think the main problem is not with the system but with ability of the people handling this system."
Alternatives to the use of predictions in the customary planning process were not ignored. Jean-Christophe Khuries suggested one when he asked: "How do you manage a large corporation without … predictions? … Better, once you have done it for a while … The technique of continuous forecasting was there, we just needed to adopt and tailor it. … continuous forecasting/budgeting/planning requires more effort to think about the future than the former once-a-year event."
The ultimate dilemma faced by senior managers with responsibility for the use of prediction in planning was posed by Antonio Sarlandes. He said, as CEO, "Should the predictions come out right, and that does happen, and I choose to ignore them, I will be blamed by the shareholders, 'How could you!' So, no, managers should not bother (listening to predictions), but yes, they may have to bother." The dilemma he poses requires that we consider this question: Can managers afford to ignore predictions in their planning efforts? What do you think?
Original Article
We're uncertain these days about many things—the future of the European economy and the EU itself, the "fiscal cliff" in the US, and the politics of the Middle East among them. Many corporate and personal plans are on hold. These may be uncertain times, but we've always been certain about two things: (1) predictions are never accurate, and (2) plans are obsolete the moment they are made.
Three books, two just out, provide provocative perspectives on the challenge. The first, by Nate Silver, The Signal and the Noise: Why Most Predictions Fail but Some Don't, argues that methods for more accurate predictions are on the way. The second, by Nassim Nicholas Taleb, titled The Black Swan: The Impact of the Highly Improbable, essentially argues that the most important things are not predictable anyway, so why obsess about predictions?
Silver's predictions of specific outcomes in the 2008 and 2012 US elections were among the most accurate. He argues that predictions fail because they too often are based on the wrong things, what he calls the "noise" (short-term stock market action) rather than the "signal" embedded in the noise (long-term secular trends in the market). In his view, the problem may grow as the era of Big Data produces more and more noise. Predictors badly mistake self-confidence for competence in data analysis. He argues for predictions: (1) expressed in terms of probabilities (as in Bayesian statistical methods and weather forecasts), (2) revisited and revised as frequently as necessary, and (3) based on a consensus among predictors.
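Silver's first two recommendations, predictions expressed as probabilities and revised as often as necessary, amount to Bayesian updating. A minimal sketch follows; the demand-forecast scenario and all of its numbers are hypothetical, chosen only to show the mechanics:

```python
# Minimal sketch of Bayesian updating: state a prediction as a probability,
# then revise it each time new evidence arrives.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Hypothetical forecast: 30% prior probability that demand rises next quarter.
p = 0.30
# Assume each favorable monthly indicator is twice as likely if demand really is rising.
for _ in range(3):
    p = bayes_update(p, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(round(p, 3))  # → 0.774
```

Three consistent signals move a 30% hunch to roughly a 77% belief; a contrary signal would pull it back down, which is the revisit-and-revise discipline Silver argues for.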
Taleb, in The Black Swan, worried that cataclysmic events (such as Hurricane Sandy) are next to impossible to predict and too costly to prepare for, given the small probability that they will ever occur. Human beings by and large do not have knowledge that is sufficiently general to comprehend signals that something big might happen, even though they think they do. We therefore continue to concentrate on matters irrelevant to Black Swans ("noise"?) and also fail to develop the humility that Silver values. Although Taleb makes recommendations for how to correct these shortcomings, one gets the idea that he really doesn't believe we can follow them.
Organizations have sought workarounds to deal with our inability to predict. For example, scenario planning, developed in the Royal Dutch Shell organization, has been adopted by many others. Rather than producing one plan, the process produces at least three: one based on a "best case," one based on a "worst case," and one representing a "best guess." One could argue that the process itself has a degree of humility built into it. Fashion retailers speed up the replenishment process to substitute fast responses to customer demand for inaccurate predictions.
Taleb himself, in a new book, Antifragile: Things That Gain from Disorder, argues that organizations and individuals should build antifragile mechanisms, "anything that has more upside than downside from random events," into strategies rather than trying to predict the unpredictable. Predictions often end up as a basis for strategic planning, which he dismisses as "superstitious babble" often carried out in large organizations whose very size exposes them to "disproportionate fragility to Black Swans." Instead, he favors so-called "barbell strategies" in which a portion of the business portfolio is created to guarantee survival in the event of a Black Swan so that the more innovative and entrepreneurial remainder of the portfolio can be designed to benefit from it (at a small risk of total loss). And if predictions have to be made, he wants them made by people with "skin in the game."
In the future, will predictions improve as we develop new methods to sort out the noise from the signal? Or will the growth of Big Data outpace the development of methods to detect the signal? Given our inability to predict, should managers bother to base plans on predictions? What do you think?
To Read More
Nate Silver, The Signal and the Noise: Why Most Predictions Fail but Some Don't (New York: The Penguin Press, 2012)
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007)
Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012)
However, as a person who was involved in corporate planning in the 1960s, I know that Xerox and Lockheed (among others) presented several papers on the subject at conferences. My work with a division of GM starting in 1968 included "best, worst, and most probable" estimates in combination with Monte Carlo computer simulations.
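The combination of "best, worst, and most probable" estimates with Monte Carlo simulation that this comment describes can be sketched in a few lines. The triangular distribution and the profit figures below are illustrative assumptions, not details from the comment:

```python
import random

def simulate_profit(worst: float, most_likely: float, best: float,
                    trials: int = 10_000, seed: int = 42) -> dict:
    """Monte Carlo over a triangular distribution built from three-point estimates."""
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode) in that order.
    outcomes = sorted(rng.triangular(worst, best, most_likely) for _ in range(trials))
    return {
        "mean": sum(outcomes) / trials,
        "p5": outcomes[int(0.05 * trials)],   # pessimistic tail
        "p95": outcomes[int(0.95 * trials)],  # optimistic tail
    }

# Hypothetical profit estimates in $M: worst -2, most likely 5, best 12.
print(simulate_profit(worst=-2.0, most_likely=5.0, best=12.0))
```

Rather than three separate plans, the simulation yields a whole distribution, so a planner can read off a 90% range instead of a single number.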
Moving on to the rest of the article, I suggest there are different uses for planning. One is to look for the most likely outcome (presidential election); another is to protect against risks (black swans). The methodology needs to be tailored for the use.
Planning will never be perfect, but to ignore it is a mistake. Before relying on a prediction, it's wise to know what information and assumptions went into the process, and to know how susceptible the conclusion is to small changes (also known as perturbation analysis).
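The perturbation analysis mentioned here can be sketched as a one-at-a-time sensitivity check: bump each assumption by 1% and see how much the conclusion moves. The profit model and input values below are invented purely for illustration:

```python
def forecast_profit(units: float, price: float, unit_cost: float, fixed_cost: float) -> float:
    # A deliberately simple forecast model standing in for any planning calculation.
    return units * (price - unit_cost) - fixed_cost

def sensitivity(model, inputs: dict, bump: float = 0.01) -> dict:
    """Percent change in output per 1% change in each input, varied one at a time."""
    base_out = model(**inputs)
    elasticities = {}
    for name, value in inputs.items():
        perturbed = dict(inputs, **{name: value * (1 + bump)})
        elasticities[name] = (model(**perturbed) - base_out) / base_out / bump
    return elasticities

base = dict(units=10_000, price=12.0, unit_cost=7.0, fixed_cost=30_000.0)
print(sensitivity(forecast_profit, base))
```

With these numbers, a 1% change in price moves profit about 6%, while a 1% change in fixed cost moves it only 1.5%, so the price assumption is the one whose provenance deserves the most scrutiny.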
Silver's example shows me that you can only have a really accurate prediction if you have mastery of the details driving outcomes.
I believe the application to my business is that in contexts or categories that we have invested significantly to understand deeply (in a Big Data sort of way...), I'll trust our predictions. And I'll trust the prediction more if it's from someone who has an unbiased 3rd party interest in the outcome. In new contexts or new categories where I don't have robust data, it's probably a better idea to get in the market but focus on limiting risk ... so that you can absorb a wide range of potential outcomes and hopefully capture the upside if the business idea is a success.
B) Development of methods to detect signals will keep pace. May not keep pace uniformly and will be a competitive advantage for those who develop good methods.
C) Managers should bother to base plans on predictions. It forces them to be disciplined. They should not however be slaves to the plan or the prediction they are following and must ask how firm-wide competencies can be developed to handle changes in predictions.
1. Forecasting is very difficult, especially when it's about the future
2. Give them a number, or a date, but never both.
3. If you're ever right, never let them forget it.
It's tongue in cheek ... but it should remind us that predictions, "Big Data" or not, should be viewed cautiously. We can always be surprised by unknowables or changes that develop in the blink of an eye.
That a strategy, based on sound predictions which are in turn based on sound assumptions, is subject to change and must be constantly monitored and adjusted as market characteristics change is a different kind of risk management altogether. It is somewhat in line with doing your best with the available means at hand, in this case information.
Furthermore, predictions help us categorize the risks associated with execution stages, allowing managers to set points of no return as well as measure progress toward success. Next, establishing predictions, and thus assumptions, engages managers to chase specific information and, more importantly, to define the value of that information, which in turn helps determine what percentage of returns the manager should be willing to forgo to yield a positive NPV.
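The "value of the information" idea in this comment can be made concrete with a small expected-NPV sketch: compare the expected outcome of deciding blind with deciding after a prediction resolves the uncertainty. The scenario probabilities and NPVs below are hypothetical:

```python
# How much is a reliable prediction worth? Compare expected NPV when deciding
# blind versus deciding after the uncertainty is resolved (perfect information).

p_good = 0.6                      # assumed probability of the favorable scenario
npv_good, npv_bad = 100.0, -40.0  # illustrative project NPVs in each scenario

# Without a prediction: invest only if the blended expected NPV is positive.
ev_without = max(0.0, p_good * npv_good + (1 - p_good) * npv_bad)

# With a perfect prediction: invest in the good scenario, walk away in the bad one.
ev_with = p_good * max(0.0, npv_good) + (1 - p_good) * max(0.0, npv_bad)

# Expected value of perfect information: a ceiling on what the prediction is worth.
value_of_information = ev_with - ev_without
print(ev_without, ev_with, value_of_information)
```

With these numbers the prediction is worth at most 16, which is exactly the "percentage of returns the manager should be willing to forgo" that the comment describes.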
I remember a professor commenting on a 40/70 rule for starting to implement a plan. It says that by the time you have 40% of the information you should set the plan in motion, and that by the time 70% of the information is available at the planning stage it is probably too late, because the same information is available to competitors.
The question shifts from "Should managers bother listening to predictions?" toward how we manage when confronted with unpredictability in an environment marked by "continuous discontinuity" (Peter Drucker).
This is pertinent advice for the practicing manager: give priority to taking action that enhances the firm's resilience rather than to building better predictive models. And how much confidence should the manager ever place in models, which do not and cannot incorporate situations that have not been seen in the past (e.g., Russia's default on her DOMESTIC debt in August 1998, which the many country risk models in use had not considered)?
The Eighth Circle of the Fraudulent, Fourth Chasm
"The augurers, their faces twisted round, forced to walk backwards."
Big Data has not produced measurably better decisions. Predictions, even from highly reliable sources, are generally not believed unless they fit the company agenda. I'll go with neutral benefit. That darned butterfly just keeps flapping.
I fail to follow the argument, "Managers should bother to base plans on predictions. It forces them to be disciplined." What is the point of basing plans on flawed predictions in a "disciplined" manner? By the same token, I have difficulty in following the earlier point, "the same prediction over time will more clearly be right or wrong." The assumption here seems to be that the object to be predicted does not change over time - like an archer repeatedly aiming at an unmovable target. Our targets - customers, competitors, regulators, politicians - have the deplorable habit of changing with time.
The financial crisis that began in the US in 2007, which Taleb is said to have predicted, and its aftermath in Europe are not due to some unexpected comet striking the earth. The crisis is an outcome not of poor prediction but of the pursuit of profit through bad allocation of resources, burdening the population of these countries with mountains of debt to a point where it became increasingly difficult to repay. The debt system created this crisis, not swans.
The antifragility thesis of Nassim Taleb is good for satisfying one's intellectual curiosity about applying probability theory, but it makes no sense in real life. The reason is that gain from any crisis means someone else's loss from the same crisis, especially in a debt crisis such as we have had since 2007. This is a zero-sum game at the level of the system as a whole, with many losers matched to a few winners, like Mr Taleb and other gamblers.
To sum up, company executives need to worry less about fantastic antifragility or black swan theories but should instead focus on running their companies ethically, with integrity, and in the interest of their employees and the societies in which they operate. They should do so not because they see themselves as gambling on rare events but as serving society in the best way they can. The totality of employees and workers in any company are the best predictors of the future as they are daily engaged in shaping it. Company executives must place less reliance on the ideas of gurus and celebrity theorists but should instead place their trust in the insights of their own employees and workers. In this way, company resources will always be properly allocated and a bright future will be enjoyed by the company that demonstrates trust in its employees and workers.
A long treatise, generally rewarding reading, but not precisely addressing the core question for the discussion, "Should managers bother listening to predictions?" The purpose of the discussion is not to engage in a critique of the validity of the theses expressed in Nassim Taleb's recent book. Hence, a large part of the commentary does not contribute to an answer to the discussion question.
The last third of the commentary reads like much of what we publish, and need to publish, in our annual reports, and we do honestly feel that we want to contribute to society. Still, that is not the purpose of the corporation. As Milton Friedman once said, and today we would feel it politically incorrect, "The business of business is business." My task as the leader of a public company is to grow shareholder wealth, and I would have severe difficulty explaining to the security analysts a drop in company performance with the argument made in the commentary, that we run the company "in the interest of our employees and the societies in which we operate." My primary responsibility is to the owners of the company. I also question whether "the totality of employees and workers in my company are the best predictors of the future as they are daily engaged in shaping it," and that "company resources will always be properly allocated and a bright future will be enjoyed by the company that demonstrates trust in its employees and workers." Indeed, these are highly noble words, but of debatable realism.
Predictions will improve.
Yes, absolutely. Should bother.
Predictions will improve, but their wise application in most cases still remains stuck in a thirteenth-century style. Here I rely on Robert Sternberg's approach: self-oriented people who rely on their own experiences and interpretation of truth rarely overcome negative situations or create positive opportunities.
Predictions may not be so clear, nor exert much influence on business decision making, but for government it is the other way round. Corporate planning, public policy, and government planning seem to be mutually exclusive in a significant number of G50 countries.
The checkpoint must be the quality of predictions: how they are communicated and how they are understood and interpreted, whether they are good enough to be taken seriously or bad enough to simply be discarded.
Best case, worst case, or best guess, most corporate decisions are made based on this morning's newspaper and/or yesterday's cash flow.
"There is no likelihood man can ever tap the power of the atom." -- Robert Millikan, Nobel Prize in Physics, 1923
"I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943
"640K ought to be enough for anybody." -- Bill Gates, 1981
"The concept is interesting and well-formed, but in order to earn better than a 'C,' the idea must be feasible." -- A Yale University management professor in response to Fred Smith's paper proposing reliable overnight delivery service. (Smith went on to found Federal Express Corp.)
"We don't like their sound, and guitar music is on the way out." -- Decca Recording Co. rejecting the Beatles, 1962.
"Stocks have reached what looks like a permanently high plateau." -- Irving Fisher, Professor of Economics, Yale University, 1929.
"There is no reason anyone would want a computer in their home." -- Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977
These statements could be taken as mildly amusing were it not for the damaging decisions of managers who believed in these predictions.
I apply this idea to the functionality of prognostication, in which stated predictions can (I believe) affect decision making positively or adversely, depending on a number of variables affecting the hearer. If, for example, I am bombarded with predictions of a particular political candidate's "sure" or even probable win, does this influence affect my voting choice, consciously or subconsciously, thereby effecting the predictor's prediction?
I think the trade-off is between focus and flexibility in strategic planning.
Professor Michael Porter, in his work on strategy, likes to emphasise that relentless focus is the key to delivering on a chosen strategy or strategic choice and to sustaining competitive advantage in the long run. However, in the same vein he stresses strategic flexibility, similar to the notion of humility suggested above. Successful organisations that have thrived over the long run have been constant innovators but also relentless in delivering on their game plans. It is almost as if these are paradoxical entities, capable of managing uncertainty as an art rather than a science.
Big Data does provide opportunities, just as Access databases, Google Trends, and other such mechanisms have done in the past. It will give us a slightly better piece of the grand puzzle of understanding what is happening and what behavioural patterns are emerging. However, we must be careful not to confuse computing grunt and analytical sophistication with interpretive nous. Knowledge and information are quite distinct from wisdom, and wisdom, stubborn as she is, has no masters.
Something former Secretary of Defense Donald Rumsfeld said (an idea derived from economist Frank Knight) is quite pertinent:
"There are known knowns; there are things we know that we know.
There are known unknowns; that is to say, there are things that we now know we don't know.
But there are also unknown unknowns - there are things we do not know we don't know."
So the unknown unknowns will always exist, just our attitude towards them needs to be updated or refreshed periodically (every financial quarter perhaps!).
1) Peter Drucker said "the best way to predict the future is to create it." In other words, create the new rules of the future rather than wait for someone else to (and then adapt). Be proactive and gain control over how the future evolves.
2) Although unpredictable causes may be infinite and unknowable, their impact on businesses is quite finite. There are only a handful of positive or negative outcomes (e.g., product obsolescence, supply disruption, competitive pressure, financial collapse, markets opening/closing, etc.). If you prepare for the finite potential outcomes, then you are ready no matter what happens. You just need enough forecasting to predict which of the finite outcomes is next to occur (and that does not need to look out very far).
3) When I look at trends, I ask 3 simple questions:
a) Direction - Is it going up or down?
b) Magnitude - Is it moving a lot or a little?
c) Speed - Is it happening quickly or slowly?
That's as precise as you need to be (and gives enough information to make a decision). Any more precision is a joke (and won't change the decision).
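The three questions above can be turned into a deliberately coarse trend classifier. The thresholds below (20% of the starting level for "a lot", 5% per period for "quickly") are arbitrary illustrative choices, not taken from the comment:

```python
def describe_trend(series: list) -> tuple:
    """Answer the three questions about a series: direction, magnitude, speed."""
    change = series[-1] - series[0]
    direction = "up" if change > 0 else "down" if change < 0 else "flat"
    # Magnitude: is the total move large relative to the starting level?
    magnitude = "a lot" if abs(change) > 0.20 * abs(series[0]) else "a little"
    # Speed: how much of the starting level does it move per period?
    per_period = abs(change) / (len(series) - 1)
    speed = "quickly" if per_period > 0.05 * abs(series[0]) else "slowly"
    return direction, magnitude, speed

print(describe_trend([100, 104, 110, 118, 130]))  # → ('up', 'a lot', 'quickly')
```

A reading this imprecise is often enough to trigger, or rule out, a decision, which is the comment's point about not sweating precision.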
4) We know that all strategic initiatives will eventually fail as they drift out of sync with a changing environment. Therefore, the goal of forecasting is not to know the future, but to:
a) Help us to know when it is time to shift strategies.
b) Help us to know how to create a new strategy which will be in sync with where the environment is shifting.
Again, that does not need a lot of precision (especially if your new strategy incorporates part of point #1).
So forecasting has value, but the value is not enhanced by sweating over precision.
It's not always easy to accept any prediction at face value, as predictions may mislead at times. If managers depend too much on such information, competitors may take advantage and announce more uncertainties or threats just to disturb the implementation of our planned strategies.
Predictions (sales forecasts) are obsolete the moment they are published (externally/internally) because the act of publishing changes the very universe on which they were based. However, dynamic plans with periodic reviews would not be obsolete the moment they are published (externally/internally).
Good predictions are based on past systemic information; they cannot be made in the absence of such information, or when it is rapidly changing (highly variable; a data transformation can reduce the variability). Also, data modelers who do not base their predictions on reasonable and timely assumptions, or who do not take all statistical safeguards to fit their models to the data, will not make good predictions. Good predictive models are parsimonious; they do not include insignificant variables (noise). Big Data increases the possibility of finding a parsimonious predictive model because it gives the opportunity to fit many different models to the data in search of the best-fitting one. Some catastrophic events are difficult to predict, but it is incorrect to conclude that we should not attempt to predict such events; predictions could become more accurate with the availability of big (more accurate and more precise) data, better forecasting methods to come, and a better understanding of the underlying systems. All predictions have limitations, but some are useful. Good predictions are made with alpha errors that favor the agency/society: if the prediction is incorrect, it will not harm the agency/society.
Fiscal cliff: raising/not raising corporate/personal taxes and controlling/not controlling government expenditure/entitlements will have adverse/advantageous supply side and demand side microeconomic effects. Hence, a middle ground (compromise) shall be strategically reached for economic expansion, job creation and deficit reduction.
It tears our hearts to hear about the Sandy Hook tragedy. Extreme social violence may be a function of the main effects of the availability of high-impact weapons, the availability or affordability of mental health services, and built-up extreme social and self hatred, and their significant interactions.
Mars does not have an atmosphere because it has lost its magnetic field. Hence, colonization of Mars shall start with some method or device (a mega scheme) to re-establish that field. Once a thriving human colony on Mars is established, a "Black Swan" could not destroy humanity in the solar system even if humans were wiped out (hope not) on earth.
Controllable and uncontrollable factors play an important role in forward-planning predictions. Natural calamities, for instance, can shatter all predictions.
In my view predictions, though necessary, are a gamble to be played with extreme caution.
I think the main problem is not with the system but with the ability of the people handling the system. Why can't we make this "fiscal cliff" a part of our long-term planning, i.e., believe that it will occur at a certain time and develop methods that could predict how big or small the "fiscal cliff" will be over a particular duration?
In a nutshell, create a "feedback mechanism" that could explore the possibility of a downfall, say, every year.
Why am I suggesting this?
Because there will always be a certain amount of uncertainty, and we might never know how big or small that particular "uncertainty" is. Therefore, there is a need for a fundamentally strong "wall" (by "wall" I mean temporary structures/laws/regulations) with which we could stabilize our markets.
This way people would be ready and prepared, and thus the after-effects would be smaller.
(How do you validate your models and results?)
As you have said, "the danger of constructing spurious correlations" is ever present in every predictive model, not just in models built with Big Data. If you have validated the model, then you have limited your probability of a false positive to the alpha value. If you have not validated the model, then you may have assumed (making, as the saying goes, an ass of u and me) that the model is valid when it is not, and you are not protected against a false positive by the alpha value. Professor Nassim Taleb predicted the financial crisis in 2008 before many other economists. He is a top-notch economist, but I wonder how much of a statistical frame he was thinking in when he wrote his newest book.
I do have the tools (along with a broad skill set) and the frame of mind needed to make Big Data work and I would entertain an opportunity. Please check my LinkedIn profile for details. I can be contacted by e-mail: rpedirisooriya@netzero.com