
    Regulating Child Sex Robots: Restriction or Experimentation?

    In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this paper, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Second, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin’s hypothesis has a reasonable prospect of being successfully tested. Third, I argue that Arkin’s hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.

    Robot rights? Towards a social-relational justification of moral consideration

    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.

    Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots

    Future Care Robots (CRs) should be able to balance a patient’s often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules. In contrast, methods for selecting these rules are lacking. Approaches departing from existing philosophical frameworks often do not result in implementable robotic control rules. Machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behaviors potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether it is possible to find such behaviors through consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient’s autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.
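    The consensus check at the heart of this approach is simple to illustrate. The Python sketch below is a minimal illustration of the idea, not the authors’ actual survey instrument or analysis: the candidate behaviors, respondent ratings, and the 80% agreement threshold are all illustrative assumptions. A candidate behavior is adopted as a rule for the hypothetical CR only if its acceptability rating clears the threshold.

    # Minimal sketch of survey-based rule selection (illustrative assumptions
    # throughout): respondents rate candidate robot behaviors for a
    # non-compliant patient as acceptable or not, and a behavior becomes a
    # rule only when agreement clears an assumed consensus threshold.
    from collections import defaultdict

    # Hypothetical survey data: (respondent_id, behavior, acceptable?)
    responses = [
        (1, "remind_patient_again", True),
        (1, "notify_caregiver",     True),
        (1, "withhold_dessert",     False),
        (2, "remind_patient_again", True),
        (2, "notify_caregiver",     False),
        (2, "withhold_dessert",     False),
        (3, "remind_patient_again", True),
        (3, "notify_caregiver",     True),
        (3, "withhold_dessert",     False),
    ]

    CONSENSUS = 0.8  # assumed agreement level required to adopt a rule

    def select_rules(responses, threshold=CONSENSUS):
        """Return behaviors whose acceptability rate meets the threshold."""
        votes = defaultdict(list)
        for _, behavior, acceptable in responses:
            votes[behavior].append(acceptable)
        return {
            behavior: sum(ratings) / len(ratings)
            for behavior, ratings in votes.items()
            if sum(ratings) / len(ratings) >= threshold
        }

    print(select_rules(responses))
    # {'remind_patient_again': 1.0} -> only this behavior reaches consensus

    In this toy run only one behavior reaches consensus, which mirrors the paper’s central worry: whether an agreed-upon, consistent rule set can be obtained at all.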

    Ethical Reductionism

    Ethical reductionism is the best version of naturalistic moral realism. Reductionists regard moral properties as identical to properties appearing in successful scientific theories. Nonreductionists, including many of the Cornell Realists, argue that moral properties instead supervene on scientific properties without identity. I respond to two arguments for nonreductionism. First, nonreductionists argue that the multiple realizability of moral properties defeats reductionism. Multiple realizability can be addressed in ethics by identifying moral properties uniquely or disjunctively with properties of the special sciences. Second, nonreductionists argue that irreducible moral properties explain empirical phenomena, just as irreducible special-science properties do. But since irreducible moral properties don’t successfully explain additional regularities, they run the risk of being pseudoscientific properties. Reductionism has all the benefits of nonreductionism, while also being more secure against anti-realist objections because of its ontological simplicity.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.

    Making metaethics work for AI: realism and anti-realism

    Engineering an artificial intelligence to play an advisory role in morally charged decision-making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work they have contributed to the study of moral agency, philosophers are well placed to contribute to the growing literature on artificial moral agency. While doing so, they could also think about how this concept affects other important philosophical concepts.

    Designing Robots for Care: Care Centered Value-Sensitive Design

    The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship—what I refer to as care robots—require rigorous ethical reflection to ensure that their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What’s more, given the stage of their development and the lack of standards provided by the International Organization for Standardization to guide their development, ethics ought to be included in the design process of such robots. The manner in which this may be accomplished, as presented here, uses the blueprint of the Value Sensitive Design approach as a means for creating a framework tailored to care contexts. Using care values as the foundational values to be integrated into a technology, and the elements of care, from the care ethics perspective, as the normative criteria, the resulting approach may be referred to as Care Centered Value-Sensitive Design. The framework proposed here allows for the ethical evaluation of care robots both retrospectively and prospectively. By evaluating care robots in this way, we may ultimately ask what kind of care we, as a society, want to provide in the future.

    Imaginative Value Sensitive Design: How Moral Imagination Exceeds Moral Law Theories in Informing Responsible Innovation

    Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that such transformative technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early-stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy, and justice. This paper argues that the VSD methodology, particularly as applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both the strengths and weaknesses of the methodology. This paper also proposes possible modalities for strengthening the VSD methodology, particularly through the application of moral imagination, and shows how moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.