
    Autonomia delle macchine e filosofia dell'intelligenza artificiale [Machine Autonomy and the Philosophy of Artificial Intelligence]

    Philosophical interest in AI and robotic autonomous systems stems prominently from distinctive ethical concerns: under which circumstances ought autonomous systems to be permitted or prohibited from performing tasks that carry significant implications for human responsibilities, moral duties, or fundamental rights? Deontological and consequentialist approaches to ethical theorizing are brought to bear on these issues in the context afforded by the case studies of autonomous vehicles and autonomous weapons. Local solutions to intertheoretic conflicts arising in these case studies are advanced towards the development of a more comprehensive ethical platform guiding the design and use of autonomous machinery.
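
    A minimal sketch of the contrast the abstract draws, assuming a toy autonomous-vehicle scenario; the actions, duties, and utility numbers below are invented for illustration and are not taken from the paper.

    # Hypothetical contrast between a deontological action filter and a
    # consequentialist action selector; all names and numbers are assumptions.

    def deontological_permit(action, duties):
        """Permit an action only if it violates none of the given duties."""
        return all(duty(action) for duty in duties)

    def consequentialist_choice(actions, utility):
        """Choose the action whose expected outcome scores highest."""
        return max(actions, key=utility)

    actions = ["brake", "swerve"]
    duties = [lambda a: a != "swerve"]            # e.g. "never leave the lane"
    utility = {"brake": 0.4, "swerve": 0.7}.get   # e.g. expected harm avoided

    # The two theories can disagree: an intertheoretic conflict in miniature.
    print([a for a in actions if deontological_permit(a, duties)])  # ['brake']
    print(consequentialist_choice(actions, utility))                # 'swerve'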

    Ethical Approaches and Autonomous Systems


    Future Vision of Dynamic Certification Schemes for Autonomous Systems

    As software becomes increasingly pervasive in critical domains like autonomous driving, new challenges arise that necessitate a rethinking of system engineering approaches. The gradual takeover of all critical driving functions by autonomous driving adds to the complexity of certifying these systems. In particular, certification procedures do not fully keep pace with the dynamism and unpredictability of future autonomous systems, and they may not fully guarantee compliance with the requirements imposed on these systems. In this paper, we identify several issues with current certification strategies that could pose serious safety risks. As examples, we highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for the inter-system cooperation needed to manage coordinated movements. Other shortcomings include the narrow focus of awarded certifications, which neglects aspects such as the ethical behavior of autonomous software systems. The contribution of this paper is threefold. First, we analyze the existing international standards used in certification processes against the requirements derived from dynamic software ecosystems and from autonomous systems themselves, and identify their shortcomings. Second, we outline six suggestions for rethinking certification to foster comprehensive solutions to the identified problems. Third, a conceptual Multi-Layer Trust Governance Framework is introduced to establish a robust governance structure for autonomous ecosystems and associated processes, including envisioned future certification schemes. The framework comprises three layers, which together support the safe and ethical operation of autonomous systems.
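
    One way to picture such a layered structure is as a stack of independent sign-offs that can be re-run whenever the software changes; the layer names and checks in this sketch are invented assumptions, since the abstract does not spell them out.

    # Hypothetical three-layer sign-off structure; layer names and checks are
    # illustrative assumptions, not the paper's actual framework.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class GovernanceLayer:
        name: str
        checks: List[Callable[[Dict], bool]] = field(default_factory=list)

        def evaluate(self, system: Dict) -> bool:
            return all(check(system) for check in self.checks)

    layers = [
        GovernanceLayer("standards",     [lambda s: s["standard_version"] >= 2]),
        GovernanceLayer("certification", [lambda s: s["software_hash"] in s["certified_hashes"]]),
        GovernanceLayer("runtime",       [lambda s: s["ethics_monitor_ok"]]),
    ]

    system = {"standard_version": 2, "software_hash": "abc123",
              "certified_hashes": {"abc123"}, "ethics_monitor_ok": True}

    # Dynamic certification: every layer must sign off again after each change.
    print(all(layer.evaluate(system) for layer in layers))  # True

    Re-running the certification layer whenever the software hash changes is one concrete reading of what it would mean for a scheme to reflect changes in a constantly evolving system.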

    Incorporating human dimension in autonomous decision-making on moral and ethical issues

    As autonomous systems become more and more pervasive, they often have to make decisions that touch on moral and ethical values. Many approaches to incorporating moral values in autonomous decision-making are based on some form of logical deduction. However, we argue here that, for decision-making to be persuasive to humans, it needs to reflect human values and judgments. Drawing on insights from our ongoing research, which uses features of the blackboard architecture for a context-aware recommender system and for a legal decision-making system that incorporates supra-legal aspects, we aim to explore whether this architecture can also be adapted to implement a moral decision-making system that generates rationales persuasive to humans. Our vision is that such a system can serve as an advisory system that considers a situation from different moral perspectives and generates the ethical pros and cons of taking a particular course of action in a given context.
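
    A rough sketch of how the blackboard pattern the authors mention could be bent to this purpose; the knowledge sources and rationales below are illustrative assumptions, not the authors' implementation.

    # Blackboard pattern applied to moral deliberation: independent knowledge
    # sources inspect a shared situation and post pros/cons as rationales.

    class Blackboard:
        def __init__(self, situation):
            self.situation = situation
            self.pros, self.cons = [], []

    def utilitarian_source(bb):
        if bb.situation["lives_saved"] > bb.situation["lives_risked"]:
            bb.pros.append("expected outcome saves more lives than it risks")

    def deontic_source(bb):
        if bb.situation["breaks_promise"]:
            bb.cons.append("the action would break an explicit promise")

    def legal_source(bb):
        if not bb.situation["lawful"]:
            bb.cons.append("the action is not clearly lawful")

    bb = Blackboard({"lives_saved": 3, "lives_risked": 1,
                     "breaks_promise": False, "lawful": True})

    # A controller would normally schedule the sources; a plain loop suffices here.
    for source in (utilitarian_source, deontic_source, legal_source):
        source(bb)

    print("pros:", bb.pros)
    print("cons:", bb.cons)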

    Artificial morality: Making of the artificial moral agents

    Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands of people. Creating such agents raises new challenges. There are philosophical questions about a machine's potential to be an agent, or a moral agent, in the first place. Then comes the problem of the social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have prompted suggestions that otherwise cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.
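
    A toy illustration of the three implementation approaches the abstract names; the rules, observed cases, and veto logic are invented assumptions, not drawn from the paper.

    # Top-down: explicit moral rules. Bottom-up: judgement learned from cases.
    # Hybrid: a hard rule can veto, otherwise defer to the learned judgement.

    RULES = {"lie": False, "help": True}               # top-down rule base

    def top_down(action):
        return RULES.get(action, False)

    OBSERVED = [("help", 1), ("help", 1), ("lie", 0)]  # (action, was_acceptable)

    def bottom_up(action):
        cases = [ok for a, ok in OBSERVED if a == action]
        return sum(cases) / len(cases) > 0.5 if cases else False

    def hybrid(action):
        return top_down(action) and bottom_up(action)

    for approach in (top_down, bottom_up, hybrid):
        print(approach.__name__, {a: approach(a) for a in ("lie", "help")})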

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.
    Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence)
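
    One hedged reading of making elicited values explicit inside deliberation is a value-weighted scoring step whose per-value breakdown can be inspected and audited; the values, weights, and actions here are hypothetical placeholders, not from the paper.

    # Values elicited from designers and stakeholders, with relative weights.
    ELICITED_VALUES = {"safety": 0.5, "privacy": 0.3, "efficiency": 0.2}

    # Each candidate action scored (0-1) against each value.
    ACTIONS = {
        "share_location": {"safety": 0.9, "privacy": 0.2, "efficiency": 0.8},
        "keep_private":   {"safety": 0.6, "privacy": 1.0, "efficiency": 0.5},
    }

    def deliberate(actions, values):
        """Pick the action with the highest value-weighted score and return
        the per-value breakdown so the choice can be explained."""
        scored = {a: {v: w * s[v] for v, w in values.items()}
                  for a, s in actions.items()}
        best = max(scored, key=lambda a: sum(scored[a].values()))
        return best, scored[best]

    print(deliberate(ACTIONS, ELICITED_VALUES))
    # ('keep_private', {'safety': 0.3, 'privacy': 0.3, 'efficiency': 0.1})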

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity, a hypothesized future event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at these agents have been put forward, but there seems to be no firm consensus as to which framework would likely yield a positive result. With the body of work they have contributed to the study of moral agency, philosophers may enrich the growing literature on artificial moral agency, and in doing so they could also consider how that concept bears on other important philosophical concepts.