
    Moral Uncertainty for Deontologists

    Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what could possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, 0.5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
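
    The core tool here is first-order stochastic dominance over distributions of moral value. A minimal sketch of that dominance check follows, with illustrative numbers that are assumptions for the example rather than values taken from the paper:

        # First-order stochastic dominance over discrete moral-value distributions.
        # A distribution is a dict mapping a moral value to its probability.

        def stochastically_dominates(dist_a, dist_b):
            """True if dist_a first-order stochastically dominates dist_b:
            P(A >= x) >= P(B >= x) for every x, strictly for some x."""
            values = sorted(set(dist_a) | set(dist_b))
            strict_somewhere = False
            for x in values:
                p_a = sum(p for v, p in dist_a.items() if v >= x)
                p_b = sum(p for v, p in dist_b.items() if v >= x)
                if p_a < p_b:
                    return False
                if p_a > p_b:
                    strict_somewhere = True
            return strict_somewhere

        # Illustrative case: the safe act is certainly constraint-respecting (value 0);
        # the risky act violates a constraint (value -1) with probability 0.6.
        safe = {0: 1.0}
        risky = {-1: 0.6, 0: 0.4}
        print(stochastically_dominates(safe, risky))  # True: the safe act dominates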

    Decision theory for agents with incomplete preferences

    Orthodox decision theory gives no advice to agents who hold two goods to be incommensurate in value because such agents will have incomplete preferences. According to standard treatments, rationality requires complete preferences, so such agents are irrational. Experience shows, however, that incomplete preferences are ubiquitous in ordinary life. In this paper, we aim to do two things: (1) show that there is a good case for revising decision theory so as to allow it to apply non-vacuously to agents with incomplete preferences, and (2) identify one substantive criterion that any such non-standard decision theory must obey. Our criterion, Competitiveness, is a weaker version of a dominance principle. Despite its modesty, Competitiveness is incompatible with prospectism, a recently developed decision theory for agents with incomplete preferences. We spend the final part of the paper showing why Competitiveness should be retained, and prospectism rejected.
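
    For readers unfamiliar with how incomplete preferences are usually modelled, the sketch below uses the standard multi-utility representation: a set of admissible utility functions, with one prospect ranked above another only when every admissible utility agrees. This is background for the kind of dominance-style principle the abstract mentions; it is not the paper's Competitiveness criterion, and the career example and numbers are invented for illustration:

        # Incomplete preferences via a set of admissible utility functions: one prospect
        # beats another only if every admissible utility prefers it in expectation.

        def expected_value(prospect, utility):
            """Prospect: list of (probability, outcome) pairs."""
            return sum(p * utility(x) for p, x in prospect)

        def unanimously_better(a, b, admissible_utilities):
            return all(expected_value(a, u) > expected_value(b, u)
                       for u in admissible_utilities)

        # Two incommensurate goods, scored oppositely by two admissible utilities.
        u1 = {"artist": 1.0, "banker": 0.4}.get
        u2 = {"artist": 0.4, "banker": 1.0}.get
        sure_artist = [(1.0, "artist")]
        sure_banker = [(1.0, "banker")]
        print(unanimously_better(sure_artist, sure_banker, [u1, u2]))  # False: no ranking either way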

    Neurobiological studies of risk assessment: A comparison of expected utility and mean-variance approaches

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results from an implementation of the paradigm are discussed.
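
    A minimal sketch of the two valuation rules being compared, expected utility versus mean-variance, for a simple gamble. The utility function, risk-aversion coefficient, and payoffs are illustrative assumptions, not stimuli from the proposed paradigm:

        import math

        def expected_utility(gamble, u):
            """Sum over states of probability times utility of the payoff in that state."""
            return sum(p * u(x) for p, x in gamble)

        def mean_variance(gamble, risk_aversion):
            """Expected payoff minus a penalty proportional to payoff variance."""
            mean = sum(p * x for p, x in gamble)
            var = sum(p * (x - mean) ** 2 for p, x in gamble)
            return mean - 0.5 * risk_aversion * var

        # A gamble is a list of (probability, payoff) pairs.
        gamble = [(0.5, 100.0), (0.5, 0.0)]
        u = lambda x: math.log(x + 1.0)           # illustrative concave utility
        print(expected_utility(gamble, u))        # ~2.31
        print(mean_variance(gamble, 0.001))       # 50 - 0.5 * 0.001 * 2500 = 48.75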

    What does Newcomb's paradox teach us?

    In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory in which the players set conditional probability distributions in a Bayes net. We show that the two game theory recommendations in Newcomb's scenario presume different Bayes nets relating your choice and the algorithm's prediction. We resolve the paradox by proving that these two Bayes nets are incompatible. We also show that the accuracy of the algorithm's prediction, the focus of much previous work, is irrelevant. In addition we show that Newcomb's scenario only provides a contradiction between game theory's expected utility and dominance principles if one is sloppy in specifying the underlying Bayes net. We also show that Newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its 'prediction' after you make your choice rather than before.
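
    A rough sketch of how the two presumed dependency structures yield conflicting recommendations: if the prediction probabilistically tracks the choice, one-boxing has the higher expected payoff; if the prediction is fixed independently of the choice, two-boxing dominates. The payoffs and the 0.99 accuracy below are the usual illustrative numbers, not the paper's Bayes-net formalism:

        # Newcomb payoffs: the opaque box holds 1,000,000 iff the algorithm predicted
        # one-boxing; the transparent box always holds 1,000.

        def ev_prediction_tracks_choice(choice, accuracy=0.99):
            """Net in which the prediction is probabilistically tied to the choice."""
            p_one_box_predicted = accuracy if choice == "one-box" else 1 - accuracy
            return 1_000_000 * p_one_box_predicted + (1_000 if choice == "two-box" else 0)

        def ev_prediction_fixed(choice, p_one_box_predicted):
            """Net in which the prediction is independent of the choice."""
            return 1_000_000 * p_one_box_predicted + (1_000 if choice == "two-box" else 0)

        for c in ("one-box", "two-box"):
            print(c, ev_prediction_tracks_choice(c), ev_prediction_fixed(c, 0.5))
        # Under the first net one-boxing wins; under the second, two-boxing is better
        # by exactly 1,000 whatever the fixed prediction probability (dominance).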

    Chance, Credence and Circles

    This is a discussion of Richard Pettigrew's book "Accuracy and the Laws of Credence". I target Pettigrew's application of the accuracy framework to derive chance-credence principles. My principal contention is that Pettigrew's preferred version of the argument might in one sense be circular and, moreover, that Pettigrew's premises have content that goes beyond that of standard chance-credence principles.

    Mean-Variance and Expected Utility: The Borch Paradox

    The model of rational decision-making in most of economics and statistics is expected utility theory (EU), axiomatised by von Neumann and Morgenstern, Savage and others. This is less the case, however, in financial economics and mathematical finance, where investment decisions are commonly based on the methods of mean-variance (MV) introduced in the 1950s by Markowitz. Under the MV framework, each available investment opportunity ("asset") or portfolio is represented in just two dimensions by the ex ante mean and standard deviation (μ, σ) of the financial return anticipated from that investment. Utility adherents consider that in general MV methods are logically incoherent. Most famously, the Norwegian insurance theorist Borch presented a proof suggesting that two-dimensional MV indifference curves cannot represent the preferences of a rational investor (he claimed that MV indifference curves "do not exist"). This is known as Borch's paradox, and it gave rise to an important but generally little-known philosophical literature relating MV to EU. We examine the main early contributions to this literature, focussing on Borch's logic and the arguments by which it has been set aside. Comment: Published at http://dx.doi.org/10.1214/12-STS408 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
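
    One standard bridge between the two frameworks discussed in this literature is quadratic utility, under which expected utility depends on a return distribution only through its mean and variance. A small sketch of that equivalence, with an invented two-state asset; this illustrates the MV/EU relationship, not Borch's construction:

        def mean_sd(returns):
            """Mean and standard deviation of a list of (probability, return) pairs."""
            mu = sum(p * x for p, x in returns)
            var = sum(p * (x - mu) ** 2 for p, x in returns)
            return mu, var ** 0.5

        def eu_quadratic(returns, b=0.5):
            """Expected utility with quadratic utility u(x) = x - 0.5 * b * x**2."""
            return sum(p * (x - 0.5 * b * x * x) for p, x in returns)

        def eu_from_moments(mu, sigma, b=0.5):
            """The same expected utility computed from (mu, sigma) alone."""
            return mu - 0.5 * b * (sigma ** 2 + mu ** 2)

        asset = [(0.5, 0.2), (0.5, -0.1)]          # illustrative two-state return
        mu, sigma = mean_sd(asset)
        print(eu_quadratic(asset), eu_from_moments(mu, sigma))  # identical: 0.04375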

    Ranking Intersecting Lorenz Curves

    This paper is concerned with the problem of ranking Lorenz curves in situations where the Lorenz curves intersect and no unambiguous ranking can be attained without introducing weaker ranking criteria than first-degree Lorenz dominance. To deal with such situations, two alternative sequences of nested dominance criteria between Lorenz curves are introduced. At the limit, the systems of dominance criteria appear to depend solely on the income share of either the worst-off or the best-off income recipient. This result suggests two alternative strategies for increasing the number of Lorenz curves that can be strictly ordered: one places more emphasis on changes that occur in the lower part of the income distribution, the other on changes that occur in the upper part of the income distribution. Both strategies turn out to depart from the Gini coefficient; one requires a higher degree of downside inequality aversion and the other a higher degree of upside inequality aversion than is exhibited by the Gini coefficient. Furthermore, it is demonstrated that the sequences of dominance criteria characterize two separate systems of nested subfamilies of inequality measures and thus provide a method for identifying the least restrictive social preferences required to reach an unambiguous ranking of a given set of Lorenz curves. Moreover, it is demonstrated that the introduction of successively more general transfer principles than the Pigou-Dalton principle of transfers forms a helpful basis for judging the normative significance of higher degrees of Lorenz dominance. The dominance results for Lorenz curves also apply to generalized Lorenz curves and thus provide convenient characterizations of the corresponding social welfare orderings.
    Keywords: generalized Gini families of inequality measures, rank-dependent measures of inequality, Gini coefficient, partial orderings, Lorenz dominance, Lorenz curve, general principles of transfers.
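
    A minimal sketch of the objects being ranked: Lorenz curve ordinates, a first-degree Lorenz dominance check, and the Gini coefficient, for small income vectors of equal size. The two example distributions are invented, and the paper's weaker, higher-degree criteria are not implemented here:

        def lorenz(incomes):
            """Lorenz curve ordinates at population shares 0, 1/n, ..., 1."""
            xs = sorted(incomes)
            total, cum, points = sum(xs), 0.0, [0.0]
            for x in xs:
                cum += x
                points.append(cum / total)
            return points

        def lorenz_dominates(a, b):
            """First-degree dominance for equal-sized vectors: a's curve nowhere below b's."""
            la, lb = lorenz(a), lorenz(b)
            return all(pa >= pb for pa, pb in zip(la, lb)) and la != lb

        def gini(incomes):
            """Gini coefficient via the mean-absolute-difference formula."""
            n, mean = len(incomes), sum(incomes) / len(incomes)
            diffs = sum(abs(x - y) for x in incomes for y in incomes)
            return diffs / (2 * n * n * mean)

        a = [2, 3, 4, 5]                           # more equal distribution
        b = [1, 2, 3, 8]                           # less equal distribution
        print(lorenz_dominates(a, b), gini(a), gini(b))  # True 0.178... 0.392...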

    A Voting-Based System for Ethical Decision Making

    We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website. Comment: 25 pages; the paper has been reorganized, and the related work and discussion sections have been expanded.
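
    As a stand-in illustration of the aggregation step, the sketch below applies a generic positional (Borda-style) rule to a few individual rankings over alternatives in a driving dilemma. The alternatives and rankings are invented, and this is not the paper's learned preference model or its swap-dominance machinery:

        from collections import defaultdict

        def borda_winner(rankings):
            """Each ranking lists alternatives best-first; the highest total score wins."""
            scores = defaultdict(int)
            for ranking in rankings:
                n = len(ranking)
                for position, alternative in enumerate(ranking):
                    scores[alternative] += n - 1 - position
            return max(scores, key=scores.get)

        rankings = [
            ["swerve", "brake", "continue"],
            ["brake", "swerve", "continue"],
            ["swerve", "continue", "brake"],
        ]
        print(borda_winner(rankings))  # 'swerve'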

    Can All-Accuracy Accounts Justify Evidential Norms?

    Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
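
    The scoring rule most commonly used in these accuracy arguments is the Brier score, under which a probabilistically coherent agent minimizes her own expected inaccuracy by reporting her actual credence. A small sketch of that propriety fact, with illustrative numbers:

        def brier_inaccuracy(credence, truth_value):
            """Squared distance between a credence and the truth value (1 or 0)."""
            return (truth_value - credence) ** 2

        def expected_inaccuracy(report, credence):
            """Expected Brier inaccuracy of reporting `report` given your actual credence."""
            return (credence * brier_inaccuracy(report, 1)
                    + (1 - credence) * brier_inaccuracy(report, 0))

        reports = [0.5, 0.6, 0.7, 0.8]
        best = min(reports, key=lambda r: expected_inaccuracy(r, 0.7))
        print(best)  # 0.7: reporting your actual credence minimizes expected inaccuracy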