
    Further evidence on game theory, simulated interaction, and unaided judgement for forecasting decisions in conflicts.

    If people in conflicts can more accurately forecast how others will respond, that should help them to make better decisions. Contrary to expert expectations, earlier research found game theorists' forecasts were less accurate than forecasts from simulated interactions using student role players. To assess whether the game theorists had been disadvantaged by the selection of conflicts, I obtained forecasts for three new conflicts (an escalating international confrontation, a takeover battle in the telecommunications industry, and a personal grievance dispute) of types preferred by game theory experts. As before, students were used as role players, and others provided forecasts using their unaided judgement. When averaged across eight conflicts, including five from earlier research, 102 forecasts by 23 game theorists were no more accurate (31% correct predictions) than 357 forecasts by students who used their unaided judgement (32%). Sixty-two percent of 105 simulated-interaction forecasts were accurate, providing an average error reduction of 47% over game-theorist forecasts. Forecasts can sometimes have value without being strictly accurate. Assessing the forecasts using the alternative criterion of usefulness led to the same conclusions about the relative merits of the methods.
    Keywords: accuracy, conflict, forecasting, game theory, judgement, methods, role playing, simulated interaction.
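
    The relation between these accuracy figures and the reported error reduction can be illustrated with a quick calculation. The Python sketch below treats error as the complement of accuracy and pools all forecasts of each kind; the pooled figure (about 45%) is an illustrative assumption, since the paper's 47% is presumably an average taken conflict by conflict rather than a pooled value.

```python
# Illustrative pooled calculation (assumption: not the paper's exact averaging).
# "Error rate" is taken as 1 minus the proportion of correct predictions.
game_theorist_accuracy = 0.31          # 102 forecasts by 23 game theorists
simulated_interaction_accuracy = 0.62  # 105 simulated-interaction forecasts

gt_error = 1 - game_theorist_accuracy          # 0.69
si_error = 1 - simulated_interaction_accuracy  # 0.38

error_reduction = (gt_error - si_error) / gt_error
print(f"Pooled error reduction: {error_reduction:.0%}")
# Prints about 45%; the paper reports 47%, presumably because reductions
# are averaged across the eight conflicts rather than pooled.
```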

    Assessing probabilistic forecasts about particular situations

    How useful are probabilistic forecasts of the outcomes of particular situations? Potentially, they contain more information than unequivocal forecasts and, as they allow a more realistic representation of the relative likelihood of different outcomes, they might be more accurate and therefore more useful to decision makers. To test this proposition, I first compared a Squared-Error Skill Score (SESS) based on the Brier score with an Absolute-Error Skill Score (AESS), and found the latter more closely coincided with decision-makers’ interests. I then analysed data obtained in researching the problem of forecasting the decisions people make in conflict situations. In that research, participants were given lists of decisions that might be made and were asked to make a prediction either by choosing one of the decisions or by allocating percentages or relative frequencies to more than one of them. For this study I transformed the percentage and relative-frequency data into probabilistic forecasts. In most cases the participants chose a single decision. To obtain more data, I used a rule to derive probabilistic forecasts from structured analogies data, and transformed multiple singular forecasts for each combination of forecasting method and conflict into probabilistic forecasts. When compared using the AESS, probabilistic forecasts were not more skilful than unequivocal forecasts.
    Keywords: accuracy, error measures, evaluation, forecasting methods, prediction
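
    As a minimal sketch of how a Brier-score-based Squared-Error Skill Score (SESS) and an Absolute-Error Skill Score (AESS) might be computed for a single probabilistic forecast, the Python below scores a hypothetical forecast against a realised decision. The uniform reference forecast, the function names, and the example probabilities are assumptions for illustration; the paper's exact definitions may differ.

```python
# Illustrative sketch only: the paper's precise SESS/AESS definitions and
# reference forecasts are not given here; a uniform "no-information" baseline is assumed.
def brier_score(probs, actual):
    """Sum of squared differences between forecast probabilities and the realised outcome."""
    return sum((p - (1.0 if k == actual else 0.0)) ** 2 for k, p in probs.items())

def absolute_error(probs, actual):
    """Sum of absolute differences between forecast probabilities and the realised outcome."""
    return sum(abs(p - (1.0 if k == actual else 0.0)) for k, p in probs.items())

def skill_score(score, reference_score):
    """Generic skill score: 1 = perfect, 0 = no better than the reference forecast."""
    return 1.0 - score / reference_score

# Hypothetical conflict with three listed decisions and one probabilistic forecast.
forecast = {"escalate": 0.5, "negotiate": 0.3, "withdraw": 0.2}
actual = "negotiate"
uniform = {k: 1.0 / len(forecast) for k in forecast}  # assumed reference forecast

sess = skill_score(brier_score(forecast, actual), brier_score(uniform, actual))
aess = skill_score(absolute_error(forecast, actual), absolute_error(uniform, actual))
print(f"SESS: {sess:.2f}  AESS: {aess:.2f}")
```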

    Competitor-oriented Objectives: The Myth of Market Share

    Competitor-oriented objectives, such as market-share targets, are promoted by academics and are commonly used by firms. A 1996 review of the evidence, summarized in this paper, indicated that competitor-oriented objectives reduce profitability. However, we found that this evidence has been ignored by managers. We then describe evidence from 12 new studies, one of which is introduced in this paper. This evidence supports the conclusion that competitor-oriented objectives are harmful, especially when managers receive information about the market shares of competitors. Unfortunately, we expect that many firms will continue to use competitor-oriented objectives to the detriment of their profitability.

    Demand Forecasting: Evidence-based Methods

    We looked at evidence from comparative empirical studies to identify methods that can be useful for predicting demand in various situations and to warn against methods that should not be used. In general, use structured methods and avoid intuition, unstructured meetings, focus groups, and data mining. In situations where there are sufficient data, use quantitative methods including extrapolation, quantitative analogies, rule-based forecasting, and causal methods. Otherwise, use methods that structure judgement, including surveys of intentions and expectations, judgmental bootstrapping, structured analogies, and simulated interaction. Managers' domain knowledge should be incorporated into statistical forecasts. Methods for combining forecasts, including Delphi and prediction markets, improve accuracy. We provide guidelines for the effective use of forecasts, including such procedures as scenarios. Few organizations use many of the methods described in this paper. Thus, there are opportunities to improve efficiency by adopting these forecasting practices.
    Keywords: accuracy, expertise, forecasting, judgement, marketing.
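
    One of the combining approaches the abstract recommends can be as simple as an equal-weights average of forecasts produced by different methods. The Python sketch below shows that rule; the function name and the example figures are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of equal-weights forecast combination (illustrative values only).
def combine_forecasts(forecasts):
    """Equal-weights average of several point forecasts of the same quantity."""
    return sum(forecasts) / len(forecasts)

# Hypothetical demand forecasts (units) for the same period from extrapolation,
# a survey of intentions, and judgmental bootstrapping.
demand_forecasts = [12_400, 11_800, 13_100]
print(combine_forecasts(demand_forecasts))  # 12433.33...
```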

    Competitor-oriented Objectives: The Myth of Market Share

    Competitor-oriented objectives, such as market-share targets, are promoted by academics and are common in business. A 1996 review of the evidence indicated that this violation of economic theory led to reduced profitability. We summarize the evidence as of 1996 and then describe evidence from 12 new studies. All of the evidence supports the conclusion that competitor-oriented objectives are harmful. However, this evidence has had only a modest impact on academic research and it seems to be largely ignored by managers. Until this situation changes, we expect that many firms will continue to use competitor-oriented objectives to the detriment of their profitability.
    Keywords: competition, market share, objectives, profitability.

    Role thinking: Standing in other people’s shoes to forecast decisions in conflicts

    Better forecasts of decisions in conflict situations, such as occur in business, politics, and war, can help protagonists achieve better outcomes. It is common advice to “stand in the other person’s shoes” when involved in a conflict, a procedure we refer to as “role thinking.” We tested this advice in order to assess the extent to which it can improve accuracy. Improvement in accuracy is important because prior research found that unaided judgment produced forecasts that were little better than guessing. We obtained role-thinking forecasts of the decisions that would be made in nine diverse conflicts: 101 from 27 Naval postgraduate students (experts) and 107 from 103 second-year organizational behavior students (novices). The accuracy of the forecasts from the novices was 33%, and of those from the experts 31%. The accuracy of the role-thinking forecasts was little different from chance, which was 28%. In contrast, when we asked groups of participants to each act as if they were in the shoes of one of the protagonists, accuracy was 60%.
    Keywords: combining; group decision-making; simulated interaction; unaided judgment

    Value of Expertise For Forecasting Decisions in Conflicts

    In important conflicts, people typically rely on experts' judgments to predict the decisions that adversaries will make. We compared the accuracy of 106 expert and 169 novice forecasts for eight real conflicts. The forecasts of experts using unaided judgment were little better than those of novices, and neither were much better than simply guessing. The forecasts of experts with more experience were no more accurate than those of experts with less. Speculating that consideration of the relative frequency of decisions might improve accuracy, we obtained 89 forecasts from novices instructed to assume there were 100 similar situations and to ascribe frequencies to decisions. Their forecasts were no more accurate than 96 forecasts from novices asked to pick the most likely decision. We conclude that expert judgment should not be used for predicting decisions that people will make in conflicts. Its use might lead decision makers to overlook other, more useful, approaches.
    Keywords: bad faith, framing, hindsight bias, methods, politics.

    Global warming: Forecasts by scientists versus scientific forecasts

    In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide, we asked: Are these forecasts a good basis for developing public policy? Our answer is “no.” To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, (3) the effects of alternative policies, and (4) whether the best policy would be successfully implemented. Proper forecasts of all four are necessary for rational policy making. The IPCC Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references to the primary sources of information on forecasting methods, despite the fact that these are easily available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
    Keywords: accuracy; audit; climate change; evaluation; expert judgment; mathematical models; public policy

    Structured analogies for forecasting

    When people forecast, they often use analogies, but in an unstructured manner. We propose a structured judgmental procedure that involves asking experts to list as many analogies as they can, rate how similar the analogies are to the target situation, and match the outcomes of the analogies with possible outcomes of the target. An administrator would then derive a forecast from the experts' information. We compared structured analogies with unaided judgments for predicting the decisions made in eight conflict situations. These were difficult forecasting problems; the 32% accuracy of the unaided experts was only slightly better than chance. In contrast, 46% of structured-analogies forecasts were accurate. Among experts who were independently able to think of two or more analogies and who had direct experience with their closest analogy, 60% of forecasts were accurate. Collaboration did not improve accuracy.
    Keywords: accuracy, analogies, collaboration, conflict, expert, forecasting, judgment.
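
    As a sketch of how an administrator might derive a forecast from experts' structured-analogies input, the Python below weights each analogy's matched outcome by its similarity rating and selects the most strongly supported outcome. This aggregation rule and the example data are assumptions for illustration; the paper's actual derivation rule may differ (for example, it might use only each expert's most similar analogy).

```python
from collections import defaultdict

# Illustrative administrator rule for structured analogies (assumption only):
# weight each analogy's matched outcome by its similarity rating and pick the
# outcome with the most support.
def derive_forecast(analogies):
    """analogies: list of (similarity_rating, matched_outcome) pairs from all experts."""
    support = defaultdict(float)
    for similarity, outcome in analogies:
        support[outcome] += similarity
    return max(support, key=support.get)

# Hypothetical expert input for a conflict with three listed decisions.
expert_analogies = [
    (7, "negotiated settlement"),
    (5, "negotiated settlement"),
    (6, "escalation"),
    (3, "status quo"),
]
print(derive_forecast(expert_analogies))  # "negotiated settlement" (support 12 vs 6 vs 3)
```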

    Polar Bear Population Forecasts: A Public-Policy Forecasting Audit

    The extinction of polar bears by the end of the 21st century has been predicted, and calls have been made to list them as a threatened species under the U.S. Endangered Species Act. The decision on whether or not to list rests upon forecasts of what will happen to the bears over the 21st century. Scientific research on forecasting, conducted since the 1930s, has led to an extensive set of principles (evidence-based procedures) that describe which methods are appropriate under given conditions. The principles of forecasting have been published and are easily available. We assessed polar bear population forecasts in light of these scientific principles. Much research has been published on forecasting polar bear populations. Using an Internet search, we located roughly 1,000 such papers. None of them made reference to the scientific literature on forecasting. We examined references in the nine unpublished government reports that were prepared “…to Support U.S. Fish and Wildlife Service Polar Bear Listing Decision.” The papers did not include references to works on scientific forecasting methodology. Of the nine papers written to support the listing, we judged two to be the most relevant to the decision: Amstrup, Marcot, and Douglas (2007), which we refer to as AMD, and Hunter et al. (2007), which we refer to as H6 to represent its six authors. AMD’s forecasts were the product of a complex causal chain. For the first link in the chain, AMD assumed that General Circulation Models (GCMs) are valid. However, the GCM models are not valid as a forecasting method and are not reliable for forecasting at the regional level being considered by AMD and H6, thus breaking the chain. Nevertheless, we audited their conditional forecasts of what would happen to the polar bear population assuming that the extent of summer sea ice will decrease substantially in the coming decades. AMD could not be rated against 26 relevant principles because the paper did not contain enough information. In all, AMD violated 73 of the 90 forecasting principles we were able to rate. They used two unvalidated methods and relied on only one polar bear expert to specify variables, relationships, and inputs into their models. The expert then adjusted the models until the outputs conformed to his expectations. In effect, the forecasts were the opinions of a single expert unaided by forecasting principles. Based on research to date, approaches based on unaided expert opinion are inappropriate for forecasting in situations with high complexity and much uncertainty. Our audit of the second most relevant paper, H6, found that it was also based on faulty forecasting methodology. For example, it extrapolated nearly 100 years into the future on the basis of only five years of data, and data for those years were of doubtful validity. In summary, experts’ predictions, unaided by evidence-based forecasting procedures, should play no role in this decision. Without scientific forecasts of a substantial decline of the polar bear population and of net benefits from feasible policies arising from listing polar bears, a decision to list polar bears as threatened or endangered would be irresponsible.
    Keywords: adaptation, bias, climate change, decision making, endangered species, expert opinion, evaluation, evidence-based principles, expert judgment, extinction, forecasting methods, global warming, habitat loss, mathematical models, scientific method, sea ice