
    Fast and frugal heuristics for portfolio decisions with positive project interactions

    Funding: ID is supported in part by funding from the National Research Foundation of South Africa (Grant ID 90782, 105782). We consider portfolio decision problems with positive interactions between projects. Exact solutions to this problem require that all interactions be assessed, demanding time, expertise and effort that may not always be available. We develop and test a number of fast and frugal heuristics – psychologically plausible models that limit the number of assessments to be made and combine these in computationally simple ways – for portfolio decisions. The proposed “add-the-best” family of heuristics constructs a portfolio by iteratively adding the project that is best in a greedy sense, with various definitions of “best”. We present analytical results showing that the information savings achievable by heuristics can be considerable; a simulation experiment showing that portfolios selected by heuristics can be close to optimal under certain conditions; and a behavioral laboratory experiment demonstrating that choices are often consistent with the use of heuristics. Add-the-best heuristics combine descriptive plausibility with effort–accuracy trade-offs that make them potentially attractive for prescriptive use.
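
The greedy construction can be sketched as follows. This is only an illustrative reading of the "add-the-best" idea, not the paper's exact specification; the project values, pairwise interaction bonuses, costs and budget are all hypothetical inputs.

```python
def add_the_best(values, interactions, costs, budget):
    """Greedy 'add-the-best' sketch: repeatedly add the affordable project
    whose marginal gain (own value plus interaction bonuses with projects
    already in the portfolio) is largest; stop when no affordable project
    adds value. Interactions are given as {frozenset((i, j)): bonus}."""
    portfolio, spent = [], 0
    remaining = set(range(len(values)))
    while True:
        best, best_gain = None, 0.0
        for p in sorted(remaining):  # deterministic tie-breaking by index
            if spent + costs[p] > budget:
                continue
            gain = values[p] + sum(interactions.get(frozenset((p, q)), 0.0)
                                   for q in portfolio)
            if gain > best_gain:
                best, best_gain = p, gain
        if best is None:
            return portfolio
        portfolio.append(best)
        remaining.remove(best)
        spent += costs[best]
```

Note that only interactions with already-selected projects are ever assessed, which is where the information savings come from; the price is that the heuristic can miss portfolios whose value lies mostly in interactions among not-yet-selected projects.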

    Simple models in finance: A mathematical analysis of the probabilistic recognition heuristic

    It is well known that laypersons and practitioners often resist using complex mathematical models such as those proposed by economics or finance, and instead use fast and frugal strategies to make decisions. We study one such strategy: the recognition heuristic. It states that people infer that an object they recognize has a higher value on a criterion of interest than an object they do not recognize. We extend previous studies by including a general model of the recognition heuristic that considers probabilistic recognition, and carry out a mathematical analysis. We derive general closed-form expressions for all the parameters of this general model and show the similarities and differences between our proposal and the original deterministic model. We provide a formula for the expected accuracy rate of decisions made according to this heuristic and analyze whether or not it exceeds the expected accuracy rate of random inference. Finally, we discuss whether having less information could be convenient for making more accurate decisions. This research has been partly supported by grants from the Agencia Nacional de Investigación e Innovación (ANII), Uruguay.
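
As a minimal illustration (far simpler than the paper's full model), suppose the higher-criterion object is recognized with probability rec_high and the lower one with rec_low, and that ties (both or neither recognized) are resolved by a coin flip; both parameter names are hypothetical:

```python
def expected_accuracy(rec_high, rec_low):
    """Expected accuracy of a probabilistic recognition heuristic:
    choose the recognized object when exactly one of the pair is
    recognized, otherwise guess at random."""
    p_only_high = rec_high * (1 - rec_low)   # heuristic chooses correctly
    p_only_low = rec_low * (1 - rec_high)    # heuristic chooses incorrectly
    p_guess = 1 - p_only_high - p_only_low   # both or neither recognized
    return p_only_high + 0.5 * p_guess
```

In this toy version the heuristic beats random inference exactly when rec_high > rec_low; the paper's closed-form analysis is considerably more general.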

    All policies are wrong, but some are useful—and which ones do no harm?

    The five of us research and teach risk analysis with an eye towards decision support. Our work has been dedicated to taming risks and helping to make challenging decisions. But nothing had prepared us for the Covid-19 pandemic. We first had to grapple with the news coming from abroad, including, for some of us, our home countries. Then, some information and research, but mostly opinions, started coming in from our academic community, and we felt the tensions. Finally, the UK went into an unofficial and then official lockdown, and all University staff were asked to redirect their research capacity to support the national effort for risk analysis and decision support. As we write this on the 20th of April, many countries, including the UK, are starting to consider how to get out of lockdown. As in the previous stages of the pandemic, there is little data, perhaps a bit more research, surely many more opinions, and definitely an overwhelming amount of personal experiences and thoughts. Here we reflect on all of the above, in case it helps the readers of this Minds in Society flash editorial to think and act, or at least not have to do so entirely on their own. As can be expected, our collage introduces more questions than it can answer.

    When Does Diversity Trump Ability (and Vice Versa) in Group Decision Making? A Simulation Study

    It is often unclear which factor plays a more critical role in determining a group's performance: the diversity among members of the group or their individual abilities. In this study, we addressed this “diversity vs. ability” issue in a decision-making task. We conducted three simulation studies in which we manipulated agents' individual ability (or accuracy, in the context of our investigation) and group diversity by varying (1) the heuristics agents used to search task-relevant information (i.e., cues); (2) the size of their groups; (3) how much they had learned about a good cue search order; and (4) the magnitude of errors in the information they searched. In each study, we found that a manipulation reducing agents' individual accuracy simultaneously increased their group's diversity, leading to a conflict between the two. These conflicts enabled us to identify certain conditions under which diversity trumps individual accuracy, and vice versa. Specifically, we found that individual accuracy is more important in task environments in which cues differ greatly in the quality of their information, and diversity matters more when such differences are relatively small. Changing the size of a group and the amount of learning by an agent had a limited impact on this general effect of task environment. Furthermore, we found that a group achieves its highest accuracy when there is an intermediate amount of errors in the cue information, regardless of the environment and the heuristic used, an effect that we believe has not been previously reported and warrants further investigation.
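
One way to see the trade-off at the heart of the study is a Condorcet-style comparison, a drastic simplification of the paper's cue-based simulations with made-up numbers: a homogeneous group whose members all err together is only as accurate as one member, while a diverse group with independent errors can out-vote a more able but homogeneous one.

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, picks the correct option (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Nine diverse agents at 65% individual accuracy out-vote a perfectly
# homogeneous group at 75%, whose majority is no better than one member.
diverse = majority_accuracy(0.65, 9)   # roughly 0.83
homogeneous = 0.75                     # identical votes add nothing
```

The assumption of fully independent errors is of course an idealization; the abstract's point is precisely that diversity and individual accuracy tend to trade off rather than vary freely.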

    Swarm Intelligence in Animal Groups: When Can a Collective Out-Perform an Expert?

    An important potential advantage of group-living that has been mostly neglected by life scientists is that individuals in animal groups may cope more effectively with unfamiliar situations. Social interaction can provide a solution to a cognitive problem that is not available to single individuals via two potential mechanisms: (i) individuals can aggregate information, thus augmenting their ‘collective cognition’, or (ii) interaction with conspecifics can allow individuals to follow specific ‘leaders’, those experts with information particularly relevant to the decision at hand. However, a priori, theory-based expectations about which of these decision rules should be preferred are lacking. Using a set of simple models, we present theoretical conditions (involving group size, and diversity of individual information) under which groups should aggregate information, or follow an expert, when faced with a binary choice. We found that, in single-shot decisions, experts are almost always more accurate than the collective across a range of conditions. However, for repeated decisions – where individuals are able to consider the success of previous decision outcomes – the collective's aggregated information is almost always superior. The results improve our understanding of how social animals may process information and make decisions when accuracy is a key component of individual fitness, and provide a solid theoretical framework for future experimental tests where group size, diversity of individual information, and the repeatability of decisions can be measured and manipulated.
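
For the single-shot case, the expert-versus-collective question can be made concrete with a binomial voting sketch (not the paper's model; the accuracies p and q are invented): how large must a group of modestly informed individuals be before its majority vote beats a single expert?

```python
from math import comb

def majority_accuracy(p, n):
    """Majority-vote accuracy of n independent voters (n odd),
    each correct with probability p on a binary choice."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def smallest_group_beating_expert(p, q, max_n=301):
    """Smallest odd group of p-accurate individuals whose aggregated
    (majority) decision is more accurate than a single expert with
    accuracy q; returns None if no group up to max_n suffices."""
    for n in range(1, max_n + 1, 2):
        if majority_accuracy(p, n) > q:
            return n
    return None
```

With p = 0.6 and q = 0.8, a fairly large group is needed before aggregation wins, which is consistent with the abstract's finding that experts tend to dominate single-shot decisions.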

    (P, p) retraining policies

    Skills that are practiced infrequently need to be retrained. A retraining policy is optimal if it minimizes the cost of keeping the probability that the skill is learned within two bounds. The (P, p) policy is to retrain only when the probability that the skill is learned has dropped to just above the lower bound, so that retraining brings this probability up to just below the upper bound. Under minimal assumptions on the cost function, a set of two easy-to-check conditions involving the relearning and forgetting functions guarantees the optimality of the (P, p) policy. The conditions hold for the power functions proposed in the psychology of learning and forgetting, but not for exponential functions.
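
In discrete time the policy can be sketched like this; it is a simplification with hypothetical parameter values, whereas the paper works with general relearning and forgetting functions:

```python
def retraining_times(P, p, retention, horizon):
    """(P, p) policy sketch: the probability that the skill is learned
    starts at the upper bound P and decays with time since the last
    (re)training according to `retention`; as soon as it would fall
    below the lower bound p, we retrain, restoring it to P.
    Returns the list of retraining times."""
    times, t_last = [], 0
    for t in range(1, horizon + 1):
        if P * retention(t - t_last) < p:
            times.append(t)
            t_last = t
    return times

# Power-law forgetting of the kind proposed in the learning literature
power_retention = lambda dt: (1 + dt) ** -0.5
```

With P = 0.9, p = 0.5 and the power-law retention above, retraining recurs every three periods; with no forgetting (retention identically 1) the policy never retrains.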

    Behavior with models: the role of psychological heuristics in operational research

    Bounded rationality refers to problems where there is not adequate time or computational capacity to obtain all information and find an optimal solution, but where a good solution must nevertheless be identified; i.e., bounded rationality is the realistic kind of rationality that laypeople and experts need to exhibit in our lives and work. This chapter presents one view of bounded rationality, which has a very strong behavioral component: it consists of prescriptive models of decision making, which have also been used to describe people’s actual behavior. The models include the few pieces of information that people use and also specify the simple ways in which people process this information. They are called psychological heuristics. This chapter provides the conceptual foundation of the psychological heuristics research program, along with a discussion of its relationship to soft and hard operational research (OR), as well as an introduction to models of psychological heuristics. It is emphasized that it should not be taken for granted that optimization models always perform better. The empirical evidence and theoretical analyses on the relative performance of psychological heuristics and optimization models are presented. A guide is provided for deciding which of the two approaches to use for which types of problems. Finally, the argument is made that psychological heuristics should be chosen for problems that are either easy or difficult, and more complex models of optimization should be used for problems in between.

    The less-is-more effect

    In inductive inference, a strong prediction is the less-is-more effect: Less information can lead to more accuracy. For the task of inferring which one of two objects has a higher value on a numerical criterion, there exist necessary and sufficient conditions under which the effect is predicted, assuming that recognition memory is perfect. Based on a simple model of imperfect recognition memory, I derive a more general characterization of the less-is-more effect, which shows the important role of the probabilities of hits and false alarms for predicting the effect. From this characterization, it follows that the less-is-more effect can be predicted even if heuristics (enabled when little information is available) have relatively low accuracy; this result contradicts current explanations of the effect. A new effect, the below-chance less-is-more effect, is also predicted. Even though the less-is-more effect is predicted to occur frequently, its average magnitude is predicted to be small, as has been found empirically. Finally, I show that current empirical tests of less-is-more-effect predictions have methodological problems and propose a new method. I conclude by examining the assumptions of the imperfect-recognition-memory model used here and of other models in the literature, and by speculating about future research.
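
The perfect-memory baseline that the abstract starts from has a well-known closed form due to Goldstein and Gigerenzer: with recognition validity alpha and knowledge validity beta, accuracy is a function of the number n of recognized objects out of N. The sketch below reproduces that formula with illustrative parameter values; the paper's contribution is the generalization to imperfect memory, which is not shown here.

```python
def comparison_accuracy(n, N, alpha, beta):
    """Expected accuracy in the two-alternative comparison task when n
    of N objects are recognized: pairs with neither object recognized
    are guessed (1/2), mixed pairs use the recognition heuristic
    (accuracy alpha), and fully recognized pairs use knowledge
    (accuracy beta). Assumes perfect recognition memory."""
    pairs = N * (N - 1)
    p_neither = (N - n) * (N - n - 1) / pairs
    p_one = 2 * n * (N - n) / pairs
    p_both = n * (n - 1) / pairs
    return 0.5 * p_neither + alpha * p_one + beta * p_both

# Less-is-more: when alpha > beta, recognizing only half the objects
# beats recognizing all of them.
partial = comparison_accuracy(50, 100, 0.8, 0.6)   # roughly 0.68
full = comparison_accuracy(100, 100, 0.8, 0.6)     # exactly beta = 0.6
```

The effect appears exactly because the mixed pairs, where the more accurate recognition heuristic applies, vanish as n approaches N, which is what the abstract's more general hit/false-alarm characterization relaxes.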

    The less-is-more effect: Predictions and tests

    In inductive inference, a strong prediction is the less-is-more effect: Less information can lead to more accuracy. For the task of inferring which one of two objects has a higher value on a numerical criterion, there exist necessary and sufficient conditions under which the effect is predicted, assuming that recognition memory is perfect. Based on a simple model of imperfect recognition memory, I derive a more general characterization of the less-is-more effect, which shows the important role of the probabilities of hits and false alarms for predicting the effect. From this characterization, it follows that the less-is-more effect can be predicted even if heuristics (enabled when little information is available) have relatively low accuracy; this result contradicts current explanations of the effect. A new effect, the below-chance less-is-more effect, is also predicted. Even though the less-is-more effect is predicted to occur frequently, its average magnitude is predicted to be small, as has been found empirically. Finally, I show that current empirical tests of less-is-more-effect predictions have methodological problems and propose a new method. I conclude by examining the assumptions of the imperfect-recognition-memory model used here and of other models in the literature, and by speculating about future research.