
    Ethically Aligned Design: An empirical evaluation of the RESOLVEDD-strategy in Software and Systems development context

    Use of artificial intelligence (AI) in human contexts calls for ethical considerations in the design and development of AI-based systems. However, little knowledge currently exists on how to provide useful, tangible tools that help software developers and designers put ethical considerations into practice. In this paper, we empirically evaluate a method that enables ethically aligned design in a decision-making process. Although this method, the RESOLVEDD-strategy, originates from the field of business ethics, it is being applied in other fields as well. We tested the RESOLVEDD-strategy in a multiple case study of five student projects in which the use of ethical tools was given as one of the design requirements. A key finding is that the mere presence of an ethical tool affects ethical consideration, creating a greater sense of responsibility even when use of the tool is not intrinsically motivated. Comment: This is the author's version of the work. The copyright holder's version can be found at https://doi.org/10.1109/SEAA.2019.0001
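
    As a rough illustration of how such a stepwise tool might surface in a development workflow, the sketch below encodes the RESOLVEDD steps as a checklist that must be filled in before a design decision is recorded. The step names follow one common expansion of the acronym from business ethics and, like the class and function names, are assumptions for illustration rather than anything taken from the paper.

        # Hypothetical sketch: the RESOLVEDD steps as a decision checklist.
        # The step names follow one common expansion of the acronym and are
        # an assumption, not taken from the paper itself.
        from dataclasses import dataclass, field

        RESOLVEDD_STEPS = [
            "Review the facts of the situation",
            "Estimate the ethical conflicts involved",
            "Solutions: list the main alternatives",
            "Outcomes of each solution",
            "Likely impacts on everyone affected",
            "Values upheld or violated by each solution",
            "Evaluate the solutions against those values",
            "Decide on a course of action",
            "Defend the decision against objections",
        ]

        @dataclass
        class DesignDecision:
            summary: str
            notes: dict[str, str] = field(default_factory=dict)

            def record(self, step: str, reasoning: str) -> None:
                if step not in RESOLVEDD_STEPS:
                    raise ValueError(f"unknown RESOLVEDD step: {step}")
                self.notes[step] = reasoning

            def is_complete(self) -> bool:
                # A decision counts as ethically considered only when every
                # step carries an explicit, non-empty note.
                return all(self.notes.get(s, "").strip() for s in RESOLVEDD_STEPS)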

    The viability of 'embedded Ethics' in robotic military systems without humans in the decision loop

    Presentation at the "Zagreb Applied Ethics Conference 2017: The Ethics of Robotics and Artificial Intelligence", Matica hrvatska, Zagreb, Croatia, 5-7 June 2017. The social regulation of robotic systems that have some elements of inbuilt artificial intelligence and are capable of interacting with the physical world without human control poses challenges of extraordinary complexity, particularly when their characteristics make them suitable for use in military operations as autonomous devices under specific conditions. My purpose is to conduct a case-study investigation of the viability of some elements of "embedded Ethics" in different devices with built-in sensors and a variable range of functionality, starting with Autonomous Weapons Systems (AWS). Based on a review of recent literature and prototypes, the expected results should give a clearer perspective on the viability of 'embedded Ethics' instructions in the programming of intelligent robotic systems, including those intended for military use. As a preliminary conclusion, the heterogeneity of designs, lethal capacity, and degrees of functional complexity in highly unpredictable operational contexts reinforces the importance of preserving human intervention in the decision loop whenever the time available for the sequence of decisions makes it possible. [Additional references available in: http://sl.ugr.es/zaec2017] Supported by R+D Project [ref. FFI2016-79000-P]: "Artificial Intelligence and moral bio-enhancement. Ethical aspects" (PI: F.D. Lara), State Program for the Promotion of Scientific and Technical Research of Excellence, Subprogram of Knowledge Generation, Oct. 2016 - Sept. 2019.
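
    The paper's preliminary conclusion lends itself to a small illustration: a control loop that defers to a human operator whenever the available decision time permits, and otherwise falls back to the most conservative built-in behaviour. Everything below (the names, the time budget, the fallback action) is a hypothetical sketch, not a description of any real system.

        # Hypothetical sketch of a human-in-the-loop decision gate.
        class OperatorConsole:
            """Stand-in for a human operator interface (illustrative only)."""
            def request_decision(self, situation: str, timeout: float) -> str:
                # A real console would block while awaiting an authorised
                # human reply; here we simply return a safe placeholder.
                return "hold_and_track"

        HUMAN_RESPONSE_BUDGET_S = 5.0  # assumed operator response time

        def decide(situation: str, console: OperatorConsole, deadline_s: float) -> str:
            if deadline_s >= HUMAN_RESPONSE_BUDGET_S:
                # Enough time for the decision sequence: keep the human
                # in the loop, as the paper's conclusion recommends.
                return console.request_decision(situation, timeout=deadline_s)
            # Too little time for human review: rather than acting
            # autonomously, fall back to the least harmful default.
            return "hold_and_track"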

    Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots

    Future Care Robots (CRs) should be able to balance a patient's often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules; in contrast, methods for selecting these rules are lacking. Approaches that depart from existing philosophical frameworks often do not result in implementable robotic control rules, while machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behaviors potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance, and we evaluate whether it is possible to find such behaviors through consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient's autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.
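
    A minimal sketch of the selection step described here, assuming Likert-scale acceptability ratings and invented thresholds and behaviour names, could look like the following: a candidate behaviour is kept only if its average rating is high and respondents do not disagree too strongly.

        # Hypothetical sketch of survey-based rule selection: keep a
        # candidate behaviour only if raters broadly agree it is acceptable.
        from statistics import mean, stdev

        # Invented ratings on a 1-5 acceptability scale (illustrative data).
        ratings = {
            "remind_patient_to_take_medication": [5, 4, 5, 4, 5],
            "report_non_compliance_to_doctor": [4, 2, 5, 3, 4],
            "withhold_snacks_against_patient_wishes": [2, 1, 3, 2, 1],
        }

        ACCEPT_MEAN = 4.0       # assumed consensus threshold on the mean
        MAX_DISAGREEMENT = 1.0  # assumed cap on the standard deviation

        selected = [
            behaviour
            for behaviour, scores in ratings.items()
            if mean(scores) >= ACCEPT_MEAN and stdev(scores) <= MAX_DISAGREEMENT
        ]
        print(selected)  # -> ['remind_patient_to_take_medication']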

    Artificial morality: Making of the artificial moral agents

    Abstract: Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents (AMAs) by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The call for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges appear in creating such agents. There are philosophical questions about a machine's potential to be an agent, or moral agent, in the first place. Then comes the problem of the social acceptance of such machines, regardless of their theoretical agency status; efforts to resolve this problem have led to suggestions that otherwise cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up, and hybrid aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.
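
    To make the difference between the implementation approaches concrete, the sketch below shows the top-down style in its simplest form: candidate actions are filtered through explicit, hand-coded rules. A bottom-up system would instead learn its evaluations from examples, and a hybrid would combine both. The rules and action attributes are illustrative assumptions, not taken from the paper.

        # Hypothetical sketch of a top-down AMA component: explicit,
        # hand-coded rules filter candidate actions before the agent acts.
        from typing import Callable

        Rule = Callable[[dict], bool]  # True means the action is permitted

        def no_deception(action: dict) -> bool:
            return not action.get("deceives_user", False)

        def respect_privacy(action: dict) -> bool:
            return not action.get("shares_private_data", False)

        TOP_DOWN_RULES: list[Rule] = [no_deception, respect_privacy]

        def permissible(action: dict) -> bool:
            # Top-down: allowed only if every coded rule passes. A
            # bottom-up approach would replace this with a learned scorer.
            return all(rule(action) for rule in TOP_DOWN_RULES)

        candidates = [
            {"name": "remind_gently"},
            {"name": "report_silently", "shares_private_data": True},
        ]
        print([a["name"] for a in candidates if permissible(a)])
        # -> ['remind_gently']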