1,641 research outputs found

    Responsible Autonomy

    As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence).

    A New Treaty for Fully Autonomous Weapons: A Need or a Want?

    Autonomous Weapon Systems (AWS) remain under debate and are assessed against the principles of International Humanitarian Law (IHL), in particular the principles of distinction and proportionality. On the moral and ethical side, some experts and global citizens agree that AWS are likely to erode morality and ethics on the battlefield and can never replicate human feeling. Human beings remain responsible for AWS because no fully autonomous weapon yet exists; there is always a human commander behind the actions. To bridge the gaps in the discussion of AWS, a new treaty should be created in order to anticipate further violations.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Gnirut: The Trouble With Being Born Human In An Autonomous World

    What if we delegated so much to autonomous AI and intelligent machines that They passed a law forbidding humans from carrying out a number of professions? We conceive the plot of a new episode of Black Mirror to reflect on what might await us and how we can deal with such a future. Comment: 5 pages, 0 figures. Accepted at the "Re-Coding Black Mirror" workshop of the International World Wide Web Conferences (WWW).

    Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control

    Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is possible and necessary in safety- and mission-critical domains like aerospace. The terms safe, trusted, and ethical use of AI are often used interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, or have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of Human-AI teaming in aerospace system control, where humans may be in, on, or out of the loop of decision-making.

    Adopting AI: How Familiarity Breeds Both Trust and Contempt

    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice it is human behavior, not technology in a vacuum, that dictates how technology seeps into -- and changes -- societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse uses of AI-enabled autonomy, spanning transportation, medicine, and national security, we exploit the inherent variation between these use cases. We find that those with familiarity and expertise with AI and similar technologies were more likely than those with a limited understanding of the technology to support all of the autonomous applications we tested, except weapons. Individuals who had already delegated the act of driving by using ride-share apps were also more positive about autonomous vehicles. However, familiarity cuts both ways: individuals are less likely to support AI-enabled technologies when these are applied directly to their own lives, especially when the technology automates tasks they are already accustomed to performing themselves. Finally, opposition to AI-enabled military applications has slightly increased over time.

    Machine learning, artificial intelligence, and the use of force by states

    Machine learning algorithms have begun to play a critical role in modern society. Governments inevitably will employ machine learning to inform their decisions about whether and how to resort to force internationally. This essay identifies scenarios in which states likely will employ machine learning algorithms to guide their decisions about using force, analyzes legal challenges that will arise from the use of force-related algorithms, and recommends prophylactic measures for states as they begin to employ these tools.