An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications
We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
Developing Responsible Research and Innovation for Robots
This paper develops a framework for responsible research and innovation (RRI) in robot design for roboticists, drawing on a study of the processes involved in the design and engineering of a range of robots, including standard manufacturing robots, humanoid robots, environmental scanning robots, and robot swarms. The importance of an iterative approach to design, the nature of transitions between design phases, and issues of uncertainty and complexity are examined for their ethical content. A cycle of RRI thinking based on reconnoitre, realisation, reflection, response, and review is described, which aligns with the general characterisation of robot engineering processes. Additionally, the importance of supporting communities, knowledge bases, and tools for assessment and analysis is noted.
Ethical Considerations and Trustworthy Industrial AI Systems
The ethics of AI in industrial environments is a new field within applied ethics, one that is highly dynamic but has no well-established issues and no standard overviews. It poses many more challenges than comparable consumer and general business applications, and the digital transformation of industrial sectors has brought even more considerations into the ethical picture. These arise from integrating AI and autonomous learning machines based on neural networks, genetic algorithms, and agent architectures into manufacturing processes.
This article presents the ethical challenges in industrial environments and the implications of developing, implementing, and deploying AI technologies and applications in industrial sectors in terms of complexity, energy demands, and environmental and climate changes.
It also gives an overview of the ethical considerations raised by digitising industry and of ways to address them, including the potential impacts of AI on economic growth and productivity, the workforce, the digital divide, and alignment with trustworthiness, transparency, and fairness.
Additionally, potential issues concerning the concentration of AI technology within only a few companies, human-machine relationships, and behavioural and operational misconduct involving AI are examined.
Manufacturers, designers, owners, and operators of AI, as part of autonomous industrial systems, can be held responsible if harm is caused. The need for accountability is therefore also addressed, particularly for industrial applications with non-functional requirements such as safety, security, reliability, and maintainability, which require AI-based technologies and applications to be auditable via assessment either internally or by a third party. This in turn requires new standards and certification schemes that allow AI systems to be assessed objectively for compliance, with repeatable and reproducible results.
This article is based on work, findings, and many discussions within the context of the AI4DI project.
Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
Robotics and Artificial Intelligence (AI) have been inextricably intertwined since their inception. Today, AI-Robotics systems have become an integral part of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These systems are built upon three fundamental architectural elements: perception, navigation and planning, and control. However, while the integration of AI-Robotics systems has enhanced the quality of our lives, it has also presented a serious problem: these systems are vulnerable to security attacks. The physical components, algorithms, and data that make up AI-Robotics systems can be exploited by malicious actors, potentially leading to dire consequences. Motivated by the need to address these security concerns, this paper presents a comprehensive survey and taxonomy across three dimensions: attack surfaces, ethical and legal concerns, and Human-Robot Interaction (HRI) security. Our goal is to provide users, developers, and other stakeholders with a holistic understanding of these areas to enhance overall AI-Robotics system security. We begin by surveying potential attack surfaces and provide mitigating defensive strategies. We then delve into ethical issues, such as dependency and psychological impact, as well as the legal concerns regarding accountability for these systems. In addition, emerging trends such as HRI are discussed, considering privacy, integrity, safety, trustworthiness, and explainability concerns. Finally, we present our vision for future research directions in this dynamic and promising field.
- …