131 research outputs found

    The Impossibility of Quine’s Indeterminacy Theory

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these similarities, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including those elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law, so there is little that is philosophically challenging in having robots be some of these agents, excepting the implications that the limits of robot technology at a given time carry for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether law is seen as a set of unambiguous rules whose application is inherently uncontroversial, and I consider the prospects for robotizing law on each conception, and likewise for robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only once they are able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    Retaliation Rationalized: Gauthier's Solution to the Deterrence Dilemma

    Gauthier claims that (1) a non-maximizing action is rational if it maximized to intend it, so that (2) if one intended to retaliate in order to deter an attack, retaliation is rational, for it maximized to intend it. I argue that even on sympathetic theories of intentions, actions, and choices, (1) is incoherent. But I defend (2) by arguing that an action is rational if it maximizes on preferences it maximized to adopt given one's antecedent preferences. (2) is true because it maximized to adopt preferences on which it maximizes to retaliate. I thus save the theory that rational actions must maximize, and extend it into the rational criticism of preferences.
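    The deterrence dilemma has a simple expected-utility structure, which the following sketch makes concrete. All numbers and names (P_ATTACK_IF_INTEND, U_RETALIATE, and so on) are illustrative assumptions, not values from Gauthier's text; the point is only that forming the retaliatory intention can maximize even when executing it cannot.

```python
# A minimal sketch of the deterrence dilemma, with hypothetical numbers.

P_ATTACK_IF_INTEND = 0.1   # attack probability given a credible retaliatory intention
P_ATTACK_IF_NOT = 0.9      # attack probability with no such intention

U_PEACE = 0.0              # no attack occurs
U_ATTACKED = -10.0         # attacked, no retaliation
U_RETALIATE = -15.0        # attacked and retaliating (worse: adds the cost of retaliation)

# Expected utility of forming the intention (being disposed to retaliate if attacked)
eu_intend = (1 - P_ATTACK_IF_INTEND) * U_PEACE + P_ATTACK_IF_INTEND * U_RETALIATE

# Expected utility of not forming it
eu_refrain = (1 - P_ATTACK_IF_NOT) * U_PEACE + P_ATTACK_IF_NOT * U_ATTACKED

print(f"EU(intend to retaliate) = {eu_intend}")   # -1.5
print(f"EU(no such intention)   = {eu_refrain}")  # -9.0

# Once attacked, however, retaliating (-15) is worse than not retaliating (-10):
# forming the intention maximized, but executing it does not.
```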

    Preference-Revision and the Paradoxes of Instrumental Rationality

    To the normal reasons that we think can justify one in preferring something x (namely, that x has objectively preferable properties, or has properties that one prefers things to have, or that x's obtaining would advance one's preferences), I argue that it can be a justifying reason to prefer x that one's very preferring of x would advance one's preferences. Here, one prefers x not because of the properties of x, but because of the properties of one's having the preference for x. Revising one's preferences in this way is rational in paradoxical choice situations like Kavka's Deterrence Paradox. I then try to meet the following objections: that this is stoicist, incoherent, or in bad faith; that it conflates instrumental and intrinsic value, gives wrong solutions to the problems presented by paradoxical choice situations, entails vicious regresses of value justification, falsifies value realism, makes valuing x unresponsive to x's properties, causes value conflict, conflicts with other standards of rationality, violates decision theory, counsels immorality, makes moral paradox, treats value change as voluntary, conflates first- and second-order values, is psychologically unrealistic, and wrongly presumes that paradoxical choice situations can even occur.

    Ideal Moral Codes

    Ideal rule utilitarianism says that a moral code C is correct if its acceptance maximizes utility; and that right action is compliance with C. But what if we cannot accept C? Rawls and L. Whitt suggest that C is correct if accepting C maximizes among codes we can accept; and that right action is compliance with C. But what if merely reinforcing a code we can't accept would maximize? G. Trianosky suggests that C is correct if reinforcing it maximizes; and that right action is action that has the effect of reinforcing compliance with C. I object to this and argue that C is correct if both accepting and reinforcing C would maximize and if C is reinforceable; and that right action consists in coming as close as possible to perfect acceptance of and compliance with C.
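    A small sketch may help keep these selection criteria apart. The candidate codes, utilities, and feasibility flags below are entirely hypothetical; the sketch illustrates only the structure of the competing proposals, not their content.

```python
# Hypothetical candidate codes and utilities; illustrative only.
from dataclasses import dataclass

@dataclass
class Code:
    name: str
    acceptance_utility: float     # utility if the code were accepted
    reinforcement_utility: float  # utility if compliance with it were reinforced
    acceptable: bool              # whether we can actually accept it
    reinforceable: bool           # whether compliance can actually be reinforced

codes = [
    Code("C1", 100, 40, acceptable=False, reinforceable=True),  # ideal, but unacceptable
    Code("C2", 70, 60, acceptable=True, reinforceable=True),
    Code("C3", 80, 30, acceptable=True, reinforceable=False),
    Code("C4", 60, 55, acceptable=True, reinforceable=True),
]

# Naive ideal rule utilitarianism: pick the code whose acceptance maximizes,
# ignoring whether acceptance is possible.
naive = max(codes, key=lambda c: c.acceptance_utility)

# The criterion argued for above: maximize over codes that are both acceptable
# and reinforceable, counting the utility of accepting and of reinforcing them.
feasible = [c for c in codes if c.acceptable and c.reinforceable]
correct = max(feasible, key=lambda c: c.acceptance_utility + c.reinforcement_utility)

print(naive.name)    # C1: maximal in the abstract, but we cannot accept it
print(correct.name)  # C2: best among codes we can both accept and reinforce
```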

    4. The Mutual Limitation of Needs as Bases of Moral Entitlements: A Solution to Braybrooke’s Problem

    David Braybrooke argues that meeting people's needs ought to be the primary goal of social policy. But he then faces the problem that our most pressing needs, the needs to be kept alive by resource-draining medical technology, threaten to exhaust the resources for meeting all other needs. I consider several solutions to this problem, eventually suggesting that the need to be kept alive is no different in kind from needs to fulfill various projects, and that needs may have a structure similar to rights, with people's legitimate needs serving as constraints on each other's entitlements to resources. This affords a set of axioms constraining possible needs. Further, if, as Braybrooke thinks, needs are created by communities approving projects, so that the means to prosecute the projects then come to count as needs, then communities are obliged to approve only projects that are co-feasible given the world's finite resources. The result is that it can be legitimate not to funnel resources towards endless life-prolongation projects.
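    The co-feasibility constraint has a simple arithmetic core, sketched below with hypothetical resource figures: a community may approve a new project, whose means would then count as needs, only if the resulting set of needs remains jointly satisfiable from finite resources.

```python
# A minimal sketch of the co-feasibility constraint; names and numbers hypothetical.

TOTAL_RESOURCES = 100.0

approved_needs = {"food": 30.0, "shelter": 25.0, "basic medicine": 20.0}

def can_approve(project_cost: float, needs: dict[str, float]) -> bool:
    """A project is approvable only if its cost is co-feasible with existing needs."""
    return sum(needs.values()) + project_cost <= TOTAL_RESOURCES

print(can_approve(15.0, approved_needs))   # True: fits within remaining resources
print(can_approve(200.0, approved_needs))  # False: e.g. open-ended life-prolongation
```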

    Preference's Progress: Rational Self-Alteration and the Rationality of Morality

    I argue that Gauthier's constrained-maximizer rationality is problematic. But standard Maximizing Rationality means one's preferences are only rational if it would not maximize on them to adopt new ones. In the Prisoner's Dilemma, it maximizes to adopt conditionally cooperative preferences. (These are detailed, with a view to avoiding problems of circularity of definition.) Morality then maximizes. I distinguish the roles played in rational choices and their bases by preferences, dispositions, moral and rational principles, the aim of rational action, and rational decision rules. I argue that Maximizing Rationality necessarily structures conclusive reasons for action. Thus conations of any sort can base rational choices only if the conations are structured like a coherent preference function; rational actions maximize on such functions. Maximization-constraining dispositions cannot integrate into a coherent preference function.
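    The claim that it maximizes to adopt conditionally cooperative preferences in the Prisoner's Dilemma can be illustrated with standard textbook payoffs. The sketch below is a simplification under an assumed transparency of dispositions, not Gauthier's full account or the author's detailed definitions.

```python
# Standard illustrative PD payoffs to the row player: (my move, your move) -> utility.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def move(agent: str, other: str) -> str:
    """A straightforward maximizer (SM) defects; a conditional cooperator (CC)
    cooperates exactly when facing another CC (assuming dispositions are
    mutually transparent)."""
    if agent == "SM":
        return "D"
    return "C" if other == "CC" else "D"

def payoff(agent: str, other: str) -> int:
    return PAYOFF[(move(agent, other), move(other, agent))]

# Against conditional cooperators, adopting CC preferences does better by one's
# original payoff-maximizing preferences than remaining a straightforward maximizer:
print(payoff("CC", "CC"))  # 3: mutual cooperation
print(payoff("SM", "CC"))  # 1: the CCer defects against a straightforward maximizer
```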

    Libertarian Agency and Rational Morality: Action-Theoretic Objections to Gauthier's Dispositional Solution of the Compliance Problem

    David Gauthier thinks agents facing a prisoner's dilemma (PD) should find it rational to dispose themselves to co-operate with those inclined to reciprocate (i.e., to acquire a constrained-maximizer, or CM, disposition), and to co-operate with other CMers. Richmond Campbell argues that since dominance reasoning shows it remains to the agent's advantage to defect, his co-operation is only rational if CM "determines" him to co-operate, forcing him not to cheat. I argue that if CM "forces" the agent to co-operate, he is not acting at all, never mind rationally. Thus, neither author has shown that co-operation is rational action in a PD.
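    The dominance reasoning Campbell appeals to is easy to exhibit with standard illustrative payoffs (not drawn from either author's text): whatever the other agent does, defecting pays more.

```python
# Standard illustrative PD payoffs to the row player: (my move, your move) -> utility.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

# Whatever the other agent does, defecting pays the row player strictly more:
for their_move in ("cooperate", "defect"):
    assert PAYOFF[("defect", their_move)] > PAYOFF[("cooperate", their_move)]

print("Defection strictly dominates cooperation.")
# Hence Campbell's worry: a CMer's co-operation seems rational only if the
# disposition forces it; the author's reply is that forced behaviour is not action.
```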
    • …