
    Insurance and safety after September 11, 2001: Coming to grips with the costs and threats of terrorism

    This chapter, originally written as a consequence of the terrorist attacks of September 11, 2001, provides an elementary, everyday introduction to the concepts of risk and insurance. Conceptually, risk has two dimensions: a potential loss and the chance of that loss being realized. People can, however, transfer risk to insurance companies against the payment of so-called premiums. In practice, one needs accurate assessments of both losses and probabilities to judge whether premiums are appropriate. For many risks this poses little problem (e.g., life insurance); it is difficult, however, to assess the risks of many other kinds of events, such as acts of terrorism. It is emphasized that, through evolution and learning, people are able to handle many of the common risks they face in life. But when people lack experience (e.g., new technologies, threats of terrorism), risk can only be assessed through imagination. Not surprisingly, insurance companies demand high prices when risks are poorly understood. In particular, the cost of insurance against possible acts of terrorism soared after September 11. How should people approach risk after the events of that day? Clearly, the world needs to protect itself from the acts of terrorists and other disturbed individuals. However, it is also important to address the root causes of such antisocial movements. It is therefore suggested that programs aimed at combating ignorance, prejudice, and social inequalities may be more effective premiums for reducing the risk of terrorism than has been recognized to date.
    Keywords: Decision making, risk, insurance, terrorism, September 11
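
    The pricing logic described above can be illustrated in a few lines of Python; the probabilities, loss amounts, and premium below are invented for illustration and are not taken from the chapter.

    ```python
    # Illustrative sketch: a risk has two dimensions, the size of a potential
    # loss and the chance of that loss occurring. All figures are hypothetical.

    def expected_loss(p_loss: float, loss_amount: float) -> float:
        """Expected cost of a risk: chance of the loss times its size."""
        return p_loss * loss_amount

    # A well-understood risk (e.g., mortality rates from actuarial tables):
    well_known = expected_loss(p_loss=0.001, loss_amount=500_000)   # 500.0

    # A poorly understood risk (e.g., terrorism): the probability itself is
    # uncertain, so an insurer prices against the pessimistic end of the range.
    pessimistic = expected_loss(p_loss=0.01, loss_amount=500_000)   # 5000.0

    premium = 600
    print(premium >= well_known)    # True  -> premium looks adequate
    print(premium >= pessimistic)   # False -> insurer would demand far more
    ```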

    The challenge of representative design in psychology and economics

    The demands of representative design, as formulated by Egon Brunswik (1956), set a high methodological standard. Both experimental participants and the situations with which they are faced should be representative of the populations to which researchers claim to generalize results. Failure to observe the latter has led to notable experimental failures in psychology from which economics could learn. It also raises questions about the meaning of testing economic theories in “abstract” environments. Logically, abstract tests can only be generalized to “abstract realities” and these may or may not have anything to do with the “empirical realities” experienced by economic actors.
    Keywords: Experiments, representative design, sampling, LeeX

    Determinants of linear judgment: A meta-analysis of lens model studies

    The mathematical representation of Brunswik’s lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly five decades. Specifically, we analyze statistics of the “lens model equation” (Tucker, 1964) associated with 259 different task environments obtained from 78 papers. In short, we find – on average – fairly high levels of judgmental achievement and note that people can achieve similar levels of cognitive performance in both noisy and predictable environments. Although overall performance varies little between laboratory and field studies, both differ in terms of components of performance and types of environments (numbers of cues and redundancy). An analysis of learning studies reveals that the most effective form of feedback is information about the task. We also analyze empirically when bootstrapping is more likely to occur. We conclude by indicating shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research.
    Keywords: Judgment, lens model, linear models, learning, bootstrapping
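
    For reference, the “lens model equation” (Tucker, 1964) decomposes judgmental achievement (the correlation r_a between judgments and criterion) into a matching index G, environmental predictability R_e, judgmental consistency R_s, and a residual-correlation term C. Below is a minimal sketch of how these statistics could be computed from raw data; the variable names (cue matrix X, criterion values, judgments) are illustrative.

    ```python
    # Sketch of the lens model equation:
    #   r_a = G * R_e * R_s + C * sqrt(1 - R_e**2) * sqrt(1 - R_s**2)
    import numpy as np

    def lens_model_stats(X, y_env, y_judge):
        """Return achievement r_a and its components (G, R_e, R_s, C)."""
        def linear_prediction(y):
            A = np.column_stack([X, np.ones(len(X))])     # cues plus intercept
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return A @ coef

        pred_env, pred_judge = linear_prediction(y_env), linear_prediction(y_judge)
        r = lambda a, b: np.corrcoef(a, b)[0, 1]

        R_e = r(y_env, pred_env)        # predictability of the environment
        R_s = r(y_judge, pred_judge)    # consistency of the judge
        G = r(pred_env, pred_judge)     # matching of the two linear models
        C = r(y_env - pred_env, y_judge - pred_judge)  # agreement in residuals

        r_a = G * R_e * R_s + C * np.sqrt(1 - R_e**2) * np.sqrt(1 - R_s**2)
        return r_a, G, R_e, R_s, C
    ```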

    Take-the-best and other simple strategies: Why and when they work 'well' in binary choice

    The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
    Keywords: Decision making, bounded rationality, lexicographic rules, LeeX
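
    A minimal sketch of the two families of strategies compared above, assuming binary cues already ordered by validity; the example choice is invented to show how a lexicographic rule and equal weighting can disagree when the environment is compensatory.

    ```python
    # Two simple strategies for choosing between alternatives a and b, each
    # described by binary cues (1 = positive, 0 = negative). Cue order for TTB
    # is assumed to reflect validity; both functions return 'a', 'b', or None.

    def take_the_best(a, b):
        """Lexicographic: decide on the first cue that discriminates."""
        for cue_a, cue_b in zip(a, b):
            if cue_a != cue_b:
                return 'a' if cue_a > cue_b else 'b'
        return None  # no cue discriminates -> guess

    def equal_weights(a, b):
        """Compensatory: compare unweighted cue sums."""
        diff = sum(a) - sum(b)
        if diff == 0:
            return None
        return 'a' if diff > 0 else 'b'

    # Three binary cues ordered by (assumed) validity; the rules disagree:
    print(take_the_best((1, 0, 0), (0, 1, 1)))  # 'a' -> first cue decides
    print(equal_weights((1, 0, 0), (0, 1, 1)))  # 'b' -> sums 1 vs 2
    ```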

    Experiencing simulated outcomes

    Whereas much literature has documented difficulties in making probabilistic inferences, it has also emphasized the importance of task characteristics in determining judgmental accuracy. Noting that people exhibit remarkable efficiency in encoding frequency information sequentially, we construct tasks that exploit this ability by requiring people to experience the outcomes of sequentially simulated data. We report two experiments. The first involved seven well-known probabilistic inference tasks. Participants differed in statistical sophistication and answered with and without experience obtained through sequentially simulated outcomes in a design that permitted both between- and within-subject analyses. The second experiment involved interpreting the outcomes of a regression analysis when making inferences for investment decisions. In both experiments, even the statistically naïve make accurate probabilistic inferences after experiencing sequentially simulated outcomes and many prefer this presentation format. We conclude by discussing theoretical and practical implications.
    Keywords: Probabilistic reasoning, natural frequencies, experiential sampling, simulation, LeeX
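
    The experiential format studied above can be mimicked in a few lines of code: rather than reasoning from stated probabilities, one “experiences” simulated cases one at a time and reads the answer off the resulting frequencies. The base rate, hit rate, and false-alarm rate below are illustrative and are not the tasks used in the experiments.

    ```python
    # Sketch: estimate P(condition | positive test) by experiencing simulated
    # outcomes sequentially instead of applying Bayes' rule to stated numbers.
    import random

    random.seed(1)
    base_rate, hit_rate, false_alarm = 0.01, 0.90, 0.05  # illustrative values

    positives, positives_with_condition = 0, 0
    for _ in range(10_000):                      # experience simulated cases
        has_condition = random.random() < base_rate
        p_positive = hit_rate if has_condition else false_alarm
        if random.random() < p_positive:
            positives += 1
            positives_with_condition += has_condition

    # Frequency-based answer to the inference question:
    print(positives_with_condition / positives)  # roughly 0.15
    ```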

    Econometrics and decision making: Effects of presentation mode

    Much of empirical economics involves regression analysis. However, does the presentation of results affect economists’ ability to make inferences for decision making purposes? In a survey, 257 academic economists were asked to make probabilistic inferences on the basis of the outputs of a regression analysis presented in a standard format. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. However, many respondents underestimated uncertainty by failing to take into account the standard deviation of the estimated residuals. The addition of graphs did not substantially improve inferences. On the other hand, when only graphs were provided (i.e., with no statistics), respondents were substantially more accurate. We discuss implications for improving practice in reporting results of regression analyses.
    Keywords: Regression analysis, presentation formats, probabilistic predictions, graphs, LeeX
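
    The inference the survey asked for can be made concrete with a small sketch: given standard regression output, the spread of an individual outcome at a known x is governed largely by the residual standard deviation, the quantity many respondents ignored. All numbers below are invented for illustration.

    ```python
    # Sketch of a probabilistic prediction from reported regression output.
    intercept, slope = 10.0, 2.0   # reported coefficient estimates (illustrative)
    residual_sd = 5.0              # standard deviation of estimated residuals

    x_new = 8.0
    point_prediction = intercept + slope * x_new          # 26.0

    # Approximate 95% interval for a single new observation; parameter
    # uncertainty is ignored here, which is reasonable in large samples.
    low = point_prediction - 1.96 * residual_sd           # about 16.2
    high = point_prediction + 1.96 * residual_sd          # about 35.8
    print((low, high))  # far wider than the fitted line alone suggests
    ```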

    Satisfaction in choice as a function of the number of alternatives: When "goods satiate" but "bads escalate"

    Whereas people are typically thought to be better off with more choices, studies show that they often prefer to choose from small as opposed to large sets of alternatives. We propose that satisfaction from choice is an inverted U-shaped function of the number of alternatives. This proposition is derived theoretically by considering the benefits and costs of different numbers of alternatives and is supported by four experimental studies. We also manipulate the perceptual costs of information processing and demonstrate how this affects the resulting “satisfaction function.” We further indicate that satisfaction when choosing from a given set is diminished if people are made aware of the existence of other choice sets. The role of individual differences in satisfaction from choice is documented by noting effects due to gender and culture. We conclude by emphasizing the need to have an explicit rationale for knowing how much choice is “enough.”
    Keywords: Consumer choice, perception of variety, tyranny of choice, visual perception, cultural differences, LeeX
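
    The inverted-U proposition follows from weighing concave benefits of additional alternatives against roughly linear evaluation costs. A toy sketch of this reasoning; the functional forms and parameters are illustrative, not those estimated in the studies.

    ```python
    # Sketch: satisfaction(n) = benefit(n) - cost(n) yields an interior optimum
    # when benefits of extra options level off while processing costs keep rising.
    import math

    def satisfaction(n, benefit_scale=10.0, cost_per_option=1.0):
        benefit = benefit_scale * math.log(1 + n)   # diminishing returns
        cost = cost_per_option * n                  # roughly linear effort
        return benefit - cost

    best_n = max(range(1, 51), key=satisfaction)
    print(best_n)  # an interior optimum: more choice helps only up to a point
    ```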

    Regions of rationality: Maps for bounded agents

    An important problem in descriptive and prescriptive research in decision making is to identify “regions of rationality,” i.e., the areas for which heuristics are and are not effective. To map the contours of such regions, we derive probabilities that heuristics identify the best of m alternatives (m > 2) characterized by k attributes or cues (k > 1). The heuristics include a single variable (lexicographic), variations of elimination-by-aspects, equal weighting, hybrids of the preceding, and models exploiting dominance. We use twenty simulated and four empirical datasets for illustration. We further provide an overview by regressing heuristic performance on factors characterizing environments. Overall, “sensible” heuristics generally yield similar choices in many environments. However, selection of the appropriate heuristic can be important in some regions (e.g., if there is low inter-correlation among attributes/cues). Since our work assumes a “hit or miss” decision criterion, we conclude by outlining extensions for exploring the effects of different loss functions.
    Keywords: Decision making, bounded rationality, lexicographic rules, choice theory, LeeX
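
    One way to make the mapping exercise concrete is a small simulation: generate choice sets, score them with a weighted-additive “true” rule, and record how often a single-variable (lexicographic) rule and equal weighting identify the best alternative as the weighting of the environment varies. The weight vectors and distributions below are illustrative, not the paper’s derivations.

    ```python
    # Sketch of mapping heuristic performance across environments by simulation.
    import numpy as np

    rng = np.random.default_rng(0)

    def hit_rates(weights, m=3, k=3, trials=5_000):
        """Probability that each heuristic picks the best of m alternatives."""
        single_hits = ew_hits = 0
        for _ in range(trials):
            cues = rng.normal(size=(m, k))          # m alternatives, k cues
            best = (cues @ weights).argmax()        # weighted-additive "truth"
            single_hits += cues[:, 0].argmax() == best   # lexicographic, cue 1
            ew_hits += cues.sum(axis=1).argmax() == best # equal weighting
        return single_hits / trials, ew_hits / trials

    print(hit_rates(np.array([0.8, 0.15, 0.05])))  # non-compensatory environment
    print(hit_rates(np.array([0.4, 0.3, 0.3])))    # compensatory: EW gains ground
    ```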

    On heuristic and linear models of judgment: Mapping the demand for knowledge

    Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from “as if” linear models. This paper illuminates the distinctions in these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of “lens model” research with novel methodology developed to specify the effectiveness of heuristics in different environments, allowing direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge – and thus “maps” – of when and which heuristic to employ.
    Keywords: Decision making, heuristics, linear models, lens model, judgmental biases

    Entrepreneurial success and failure: Confidence and fallible judgement

    Excess entry – or the high failure rate of market-entry decisions – is often attributed to overconfidence exhibited by entrepreneurs. We show analytically that whereas excess entry is an inevitable consequence of imperfect assessments of entrepreneurial skill, it does not imply overconfidence. Judgmental fallibility leads to excess entry even when everyone is underconfident. Self-selection implies greater confidence (but not necessarily overconfidence) among those who start new businesses than those who do not, and among successful entrants than failures. Our results question claims that “entrepreneurs are overconfident” and emphasize the need to understand the role of judgmental fallibility in producing economic outcomes.
    Keywords: Excess entry, fallible judgment, overconfidence, skill uncertainty, entrepreneurship, LeeX
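
    The analytical point can be illustrated by simulation: give agents a noisy and even downward-biased perception of their own skill, let them enter whenever they believe they clear the bar for success, and the failure rate among entrants remains high. All parameters below are illustrative.

    ```python
    # Sketch: excess entry from fallible self-assessment, even with underconfidence.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    true_skill = rng.normal(0, 1, n)
    # Perceptions are noisy and biased downward, so agents are underconfident.
    perceived = true_skill - 0.2 + rng.normal(0, 1, n)

    skill_needed = np.quantile(true_skill, 0.90)   # only the top 10% can succeed
    entrants = perceived >= skill_needed           # enter if you think you clear the bar
    would_succeed = true_skill >= skill_needed

    failures = np.logical_and(entrants, ~would_succeed)
    print(true_skill.mean() > perceived.mean())    # True: underconfident on average
    print(failures.sum() / entrants.sum())         # yet most entrants still fail
    ```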