4,496 research outputs found

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is little that is philosophically challenging in having robots serve as some of these agents, excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules that are inherently uncontroversial in each application, and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. Given the body of work they have produced in the study of moral agency, philosophers are well placed to contribute to the growing literature on artificial moral agency. In doing so, they could also consider how this concept bears on other important philosophical concepts.

    Scientism on Steroids: A Review of Freedom Evolves by Daniel Dennett (2003) (review revised 2019)

    "People say again and again that philosophy doesn't really progress, that we are still occupied with the same philosophical problems as were the Greeks. But the people who say this don't understand why it has to be so. It is because our language has remained the same and keeps seducing us into asking the same questions. As long as there continues to be a verb 'to be' that looks as if it functions in the same way as 'to eat' and 'to drink', as long as we still have the adjectives 'identical', 'true', 'false', 'possible', as long as we continue to talk of a river of time, of an expanse of space, etc., etc., people will keep stumbling over the same puzzling difficulties and find themselves staring at something which no explanation seems capable of clearing up. And what's more, this satisfies a longing for the transcendent, because, insofar as people think they can see the 'limits of human understanding', they believe of course that they can see beyond these." This quote is from Ludwig Wittgenstein, who redefined philosophy some 70 years ago (but most people have yet to find this out). Dennett, though he has been a philosopher for some 40 years, is one of them. It is also curious that both he and his prime antagonist, John Searle, studied under famous Wittgensteinians (Searle with John Austin, Dennett with Gilbert Ryle), but Searle more or less got the point and Dennett did not (though it is stretching things to call Searle or Ryle Wittgensteinians). Dennett is a hard determinist (though he tries to sneak reality in the back door), and perhaps this is due to Ryle, whose famous book 'The Concept of Mind' (1949) continues to be reprinted. That book did a great job of exorcising the ghost, but it left the machine. Dennett enjoys making the mistakes that Wittgenstein, Ryle (and many others since) have exposed in detail. Our use of the words consciousness, choice, freedom, intention, particle, thinking, determines, wave, cause, happened, event (and so on endlessly) is rarely a source of confusion, but as soon as we leave normal life and enter philosophy (or any discussion detached from the environment in which language evolved, i.e., the exact context in which the words had meaning), chaos reigns. Like most, Dennett lacks a coherent framework, which Searle has called the logical structure of rationality. I have expanded on this considerably since I wrote this review, and my recent articles show in detail what is wrong with Dennett's approach to philosophy, which one might call scientism on steroids. Let me end with another quote from Wittgenstein: 'Ambition is the death of thought.' Those wishing a comprehensive, up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019), 'The Logical Structure of Human Behavior' (2019), and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).

    From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 3)

    This third paper locates the synthetic neurorobotics research reviewed in the second paper in terms of themes introduced in the first paper. It begins with biological non-reductionism as understood by Searle. It emphasizes the role of synthetic neurorobotics studies in accessing the dynamic structure essential to consciousness, with a focus on system criticality and self; develops a distinction between simulated and formal consciousness based on this emphasis; reviews Tani and colleagues' work in light of this distinction; and ends by forecasting the increasing importance of synthetic neurorobotics studies for cognitive science and philosophy of mind going forward, finally in regards to most- and myth-consciousness.

    Artificial intelligence: ChatGPT and human gullibility

    Artificial intelligence (AI) has advanced rapidly in the past decade. The arrival of ChatGPT last year has pushed the debate about AI into the public sphere. ChatGPT, and similar tools, do things we once thought were outside the ability of computers. This raises questions about how we educate people on the capabilities and limitations of such tools. This article provides an overview of artificial intelligence and explores what ChatGPT is capable of doing. It also raises questions about morality, responsibility, sentience, intelligence, and how humans' propensity to anthropomorphise makes us gullible and thus ready to believe that this technology is delivering something that it cannot.

    Moral psychology of sex robots: An experimental study – how pathogen disgust is associated with interhuman sex but not interandroid sex

    The idea of sex with robots seems to fascinate the general public, raising both enthusiasm and revulsion. We ran two experimental studies (Ns = 172 and 260) in which we compared people's reactions to variants of stories about a person visiting a bordello. Our results show that paying for the services of a sex robot is condemned less harshly than paying for the services of a human sex worker, especially if the payer is married. We have for the first time experimentally confirmed that people are somewhat unsure about whether using a sex robot while in a committed monogamous relationship should be considered infidelity. We also shed light on the psychological factors influencing attitudes toward sex robots, including disgust sensitivity and interest in science fiction. Our results indicate that sex with a robot is indeed genuinely considered as sex, and a sex robot is genuinely seen as a robot; thus, we show that standard research methods on sexuality and robotics are also applicable in research on sex robotics.

    Moral Reasoning and Emotion

    This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.

    The Morality of Artificial Friends in Ishiguro’s Klara and the Sun

    Can artificial entities be worthy of moral consideration? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status ("the machine question") and moral agency ("the AMA question")—in light of Kazuo Ishiguro's 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: (1) the view "from within," including the standard (or "metaphysical") perspective on moral agency, and (2) the view "from outside," which includes behaviorism, functionalism, and the social-relational perspective. Importantly, while the story illustrates both views, it exposes the epistemological vulnerability of the first in relation to the practical and social reality imposed by the second. That is, regardless of what metaphysical properties the Artificial Friend Klara can be said to have (from within), her moral status as well as her agency ultimately depend on the views of others (from outside), including those others' own epistemic beliefs about the nature of consciousness and personhood.

    Are We Ready for Artificial Ethics: A.I. and the Future of Ethical Decision Making
