
    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized future event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity is whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have focused on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, several frameworks for arriving at these agents have been put forward, but there is no firm consensus on which framework is likely to yield a positive result. Given the body of work they have produced in the study of moral agency, philosophers are well placed to contribute to the growing literature on artificial moral agency. In doing so, they could also consider how this concept bears on other important philosophical concepts.

    Forecasting Transitions in Digital Society: From Social Norms to AI Applications

    The use of AI and digitalization in many areas of everyday life holds great potential but also introduces significant societal transitions. This paper takes a closer look at three exemplary areas of central social and psychological relevance that might serve as a basis for forecasting transitions in the digital society: (1) social norms in the context of digital systems; (2) surveillance and social scoring; and (3) artificial intelligence as a decision-making aid or decision-making authority. For each of these areas, we highlight current trends and developments and then present future scenarios that illustrate possible societal transitions, the open questions they raise, and how such predictions might inform responsible technology design.

    GDPR-compliant AI-based automated decision-making in the world of work

    Artificial intelligence is spreading fast in our everyday life, and the world of work is no exception. AI increasingly shapes the employment context; emerging areas include augmented and automated decision-making. As AI-based decision-making is fuelled by personal data, compliance with data protection frameworks is indispensable. Even though automated decision-making is already addressed by the European norms on data protection – especially the GDPR – their application in the world of work raises specific questions. The paper examines, in the light of the ‘general’ data protection background, what specific data protection challenges arise in the field of AI-based automated decision-making in the context of employment. As a result of the research, the paper provides a detailed overview of the European legal framework on the data protection aspects of AI-based automated decision-making in the employment context. It identifies the main challenges, such as the applicability of the existing legal framework to current use cases and the specific questions relating to lawful bases in the world of work, and provides guidelines on how to address these challenges.

    AI Suffrage: A four-country survey on the acceptance of an automated voting system

    Governments have begun to employ technological systems that use massive amounts of data and artificial intelligence (AI) in domains such as law enforcement, public health, and social welfare. In some areas, shifts in public opinion increasingly favor technology-aided public decision-making. This development presents an opportunity to explore novel approaches to how technology could be used to reinvigorate democratic governance and how the public perceives such changes. The study therefore posits a hypothetical AI voting system that mediates political decision-making between citizens and the state. We conducted a four-country online survey (N=6043) in Greece, Singapore, Switzerland, and the US to find out which factors affect the public’s acceptance of such a system. The data show that Singaporeans are the most likely, and Greeks the least likely, to accept the system. Considerations of the technology’s utility have a large effect on acceptance rates across cultures, whereas attitudes towards political norms and political performance have only partial effects.

    Taming the algorithm - The right not to be subject to an automated decision in the General Data Protection Regulation

    “Taming the Algorithm” by Paweł Kuch deals with a provision of EU data protection law that is special in several respects. In contrast to the other norms of the GDPR, the provision on automated individual decisions (Art. 22 GDPR) does not contain general specifications for the processing of personal data but regulates a specific constellation of such processing. Art. 22 GDPR rests on the assumption that decision-making by machines and algorithms is problematic and must therefore be legally framed, with the final decision left to a human being. Recent developments in artificial intelligence (AI) have opened up numerous fields of application, so the question of the legal understanding of automated individual decisions has recently gained in importance.

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of integrating the ethical views of such stakeholders into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the stakeholders’ arguments relate to one another, which may already resolve the dilemma. Second, Jiminy combines the stakeholders’ normative systems, so that their combined expertise may resolve the dilemma. Third, and only if these two other methods have failed, Jiminy uses context-sensitive rules to decide which stakeholder takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision-making, but also for providing explanations about ethical behavior.
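    The three dilemma-resolution methods described in this abstract correspond to operations on a Dung-style abstract argumentation framework. The sketch below is a minimal, hypothetical illustration, not the authors' actual implementation: the argument names and the example dilemma are invented. It computes a grounded extension and shows how adding an argument, or removing an attack via a context-sensitive preference, can break a deadlock between mutually attacking stakeholder arguments.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: iteratively accept arguments all of whose attackers are
    defeated, and defeat arguments attacked by an accepted argument."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:      # every attacker already defeated
                accepted.add(a)
                changed = True
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers & accepted:       # attacked by an accepted argument
                defeated.add(a)
                changed = True
    return accepted

# Hypothetical dilemma: two stakeholder arguments attack each other,
# so the grounded extension is empty -- no resolution yet.
args = {"brake", "swerve"}
atk = {("brake", "swerve"), ("swerve", "brake")}
assert grounded_extension(args, atk) == set()

# Method 1 (adding an argument): a new argument attacking "swerve"
# breaks the symmetry and resolves the dilemma.
args2 = args | {"law_requires_braking"}
atk2 = atk | {("law_requires_braking", "swerve")}
assert grounded_extension(args2, atk2) == {"law_requires_braking", "brake"}

# Method 3 (removing an attack): a context-sensitive preference for one
# stakeholder drops the counter-attack, which also resolves the dilemma.
atk3 = atk - {("swerve", "brake")}
assert grounded_extension(args, atk3) == {"brake"}
```

    Method 2 (adding attacks when normative systems are combined) works the same way: merging stakeholders' rule bases enlarges the attack relation, which can likewise change the grounded extension.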

    Artificial morality: Making of the artificial moral agents

    Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents (AMAs) by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The demand for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands of people. Creating such agents raises new challenges. There are philosophical questions about a machine’s potential to be an agent, or a moral agent, in the first place. Then comes the problem of the social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have led to suggestions that additional psychological (emotional and cognitive) competence is needed in otherwise cold moral machines. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up, and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence).