
    The wisdom of the crowd playing The Price Is Right

    In The Price Is Right game show, players compete to win a prize by placing bids on its price. We ask whether it is possible to achieve a “wisdom of the crowd” effect by combining the bids to produce an aggregate price estimate that is superior to the estimates of individual players. Using data from the game show, we show that a wisdom of the crowd effect is possible, especially by using models of the decision-making processes involved in bidding. The key insight is that, because of the competitive nature of the game, what people bid is not necessarily the same as what they know. This means better estimates are formed by aggregating latent knowledge than by aggregating observed bids. We use our results to highlight the usefulness of models of cognition and decision-making in studying the wisdom of the crowd, which is often approached only from non-psychological statistical perspectives.
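    The baseline the abstract contrasts against can be sketched in a few lines: pooling bids with a median is often closer to the truth than the typical individual bid. The numbers below are hypothetical, and this naive bid-aggregation is not the paper's cognitive model, which aggregates latent knowledge rather than observed bids.

```python
# Minimal "wisdom of the crowd" sketch with hypothetical bids:
# the median of the bids lands closer to the true price than the
# average individual bid does.
from statistics import median

true_price = 1000                     # hypothetical true prize price
bids = [850, 920, 1040, 1100, 780]    # hypothetical player bids

crowd_estimate = median(bids)
crowd_error = abs(crowd_estimate - true_price)
mean_individual_error = sum(abs(b - true_price) for b in bids) / len(bids)

print(crowd_estimate)         # 920
print(crowd_error)            # 80
print(mean_individual_error)  # 118.0
```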

    A Voting-Based System for Ethical Decision Making

    We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
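    The runtime aggregation step can be illustrated with a standard positional voting rule. The paper builds on swap-dominance efficient rules; the plain Borda count below is only a stand-in to show what "aggregate preferences to identify a choice" means, and the scenario options are hypothetical.

```python
# Illustrative preference aggregation via Borda count: each voter's
# ranking lists alternatives best-first; an alternative earns
# (n - 1 - position) points per ranking, and the top scorer wins.
from collections import defaultdict

def borda_winner(rankings):
    """Return the alternative with the highest total Borda score."""
    scores = defaultdict(int)
    for ranking in rankings:
        for pos, alt in enumerate(ranking):
            scores[alt] += len(ranking) - 1 - pos
    return max(scores, key=scores.get)

rankings = [
    ["swerve", "brake", "stay"],
    ["brake", "swerve", "stay"],
    ["swerve", "stay", "brake"],
]
print(borda_winner(rankings))  # swerve
```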

    Informants in Organizational Marketing Research

    Organizational research frequently involves seeking judgmental data from multiple informants within organizations. Researchers are often faced with determining how many informants to survey, who those informants should be, and (if more than one) how best to aggregate responses when disagreement exists between those responses. Using both recall and forecasting data from a laboratory study involving the MARKSTRAT simulation, we show that when there are multiple respondents who disagree, responses aggregated using confidence-based or competence-based weights outperform those with data-based weights, which in turn provide significant gains in estimation accuracy over simply averaging respondent reports. We then illustrate how these results can be used to determine the best number of respondents for a market research task as well as to provide an effective screening mechanism when seeking a single, best informant.
    Keywords: screening; marketing research; aggregation; organizational research; survey research
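    The contrast between simple averaging and weighted aggregation can be sketched abstractly. The estimates and weights below are hypothetical, not from the MARKSTRAT study, where the confidence- and competence-based weights are derived from informant data rather than fixed.

```python
# Hypothetical sketch of confidence-weighted aggregation of multiple
# informants' reports, compared against a simple average.
def weighted_aggregate(estimates, weights):
    """Weighted mean of estimates; weights need not sum to 1."""
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

estimates   = [40.0, 55.0, 70.0]   # three informants' reports
confidences = [0.9, 0.5, 0.2]      # their self-rated confidence

simple_mean = sum(estimates) / len(estimates)
weighted    = weighted_aggregate(estimates, confidences)

print(simple_mean)  # 55.0
print(weighted)     # 48.4375  (pulled toward the most confident informant)
```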

    Looking With One Eye Closed: The Twilight of Administrative Law

    In an article published recently in this Journal, Judge Loren Smith calls for a change in the focus of thinking and writing about administrative law. Attractive though his general themes are, in developing them Judge Smith passes much too quickly over two important points: the difficulty of arriving at political consensus, and the importance to political consensus of exactly those processes to which Smith objects.

    Joint perceptual decision-making: a case study in explanatory pluralism.

    Traditionally, different approaches to the study of cognition have been viewed as competing explanatory frameworks. An alternative view, explanatory pluralism, regards different approaches to the study of cognition as complementary ways of studying the same phenomenon, at specific temporal and spatial scales, using appropriate methodological tools. Explanatory pluralism has often been described abstractly, but has rarely been applied to concrete cases. We present a case study of explanatory pluralism. We discuss three separate ways of studying the same phenomenon: a perceptual decision-making task (Bahrami et al., 2010), where pairs of subjects share information to jointly individuate an oddball stimulus among a set of distractors. Each approach analyzed the same corpus but targeted different units of analysis at different levels of description: decision-making at the behavioral level, confidence sharing at the linguistic level, and acoustic energy at the physical level. We discuss the utility of explanatory pluralism for describing this complex, multiscale phenomenon, show ways in which this case study sheds new light on the concept of pluralism, and highlight good practices to critically assess and complement approaches.