
    Philosophical Signposts for Artificial Moral Agent Frameworks

    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could account for the nature of Artificial Moral Agents may consider certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.

    Different Discussions on Roboethics and Information Ethics Based on Different Contexts (Ba): Discussions on Robots, Informatics and Life in the Information Era in Japanese Bulletin Board Forums and Mass Media

    In this paper, I will analyze what sort of invisible reasons lie behind the differences in discussions on roboethics and IE (Information Ethics) in Japan and "Western" cultures, focusing on (1) recent trends in roboethics research in "Western" cultures, and (2) the tendencies in the portrayal of robots, ICTs, informatics, and life in the information era reflected in newspaper reports and talks on BBSs in Japan. As we will see in this paper, Japanese people have difficulty in understanding some of the key concepts used in the fields of roboethics and IE (Information Ethics), such as "autonomy" or "responsibility (of robots)". This difficulty appears to derive from different types of discussions based on different cultural contexts (Ba), in which the majority of people in each culture are provided with a certain sort of shared/normalized frames of narratives. In my view, and according to some Japanese critics and authors, the sense of "reality" of Japanese people is strongly related to "emotional sensitivity to things/persons/events in life" or "direct, non-mediated, intuitive awareness/knowing" (Izutsu, 2001). These tendencies in Japanese minds seem to account for their limited interest in "abstract" discussions as well as in straightforward emotional expressions with regard to robots and ICTs.

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these similarities, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is little that is philosophically challenging in having robots be some of these agents, excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules inherently uncontroversial in each application, and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    Sexual Rights, Disability and Sex Robots

    I argue that the right to sexual satisfaction of severely physically and mentally disabled people, and of elderly people who suffer from neurodegenerative diseases, can be fulfilled by deploying sex robots. This would enable us to satisfy the sexual needs of many who cannot provide for their own sexual satisfaction, without at the same time violating anybody's right to sexual self-determination. I don't offer a full-blown moral justification of deploying sex robots in such cases, as not all morally relevant concerns can be addressed here; rather, I put forward a plausible way of fulfilling acute sexual needs without thereby violating anybody's sexual rights.

    Moral Courage in Organizations

    {Excerpt} Moral courage is the strength to use ethical principles to do what one believes is right even though the result may not be to everyone's liking or could occasion personal loss. In organizations, some of the hardest decisions have ethical stakes: it is everyday moral courage that sets an organization and its members apart. Courage is the ability to confront danger, fear, intimidation, pain, or uncertainty. Physical courage is fortitude in the face of death (and its threat), hardship, or physical pain. Moral courage, the form the attribute nowadays refers to, is, put simply, the ability to act rightly in the face of discouragement or opposition, possibly and knowingly running the risk of adverse personal consequences. Springing from ethics (notably integrity, responsibility, compassion, and forgiveness), it is the quality of mind or spirit that enables a person to withstand danger, difficulty, or fear; persevere; and venture. Comprehensively, as stated by Christopher Rate et al., it is a willful, intentional act, executed after mindful deliberation, involving objective substantial risk to the bearer, and primarily motivated to bring about a noble good or worthy end despite, perhaps, the presence of the emotion of fear.

    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should take away the importance of a proactive design approach to intelligent agents.