European regulatory framework for person carrier robots
The aim of this paper is to establish the grounds for a future regulatory framework for Person Carrier Robots, covering both legal and ethical aspects. Current industrial standards focus on physical human–robot interaction, i.e. on the prevention of harm. Current robot technology nonetheless raises challenges in other areas of the legal domain. The main issues comprise privacy, data protection, liability, autonomy, dignity, and ethics. The paper first discusses the need to take these interdisciplinary aspects of robot technology into account in order to offer complete legal coverage to citizens. As the European Union has begun using impact assessment methodology when drafting regulations for new technologies, a new methodology based on it is proposed to approach the introduction of personal care robots. Then, after framing the discussion with a use case, the legal challenges involved are analysed. Some concrete scenarios help to ease the explanatory analysis.
RE-MOVE: An Adaptive Policy Design Approach for Dynamic Environments via Language-Based Feedback
Reinforcement learning-based policies for continuous-control robotic navigation tasks often fail to adapt to changes in the environment during real-time deployment, which may result in catastrophic failures. To address this limitation, we propose a novel approach called RE-MOVE (REquest help and MOVE on), which uses language-based feedback to adjust trained policies to real-time changes in the environment. In this work, we enable the trained policy to decide when to ask for feedback and how to incorporate feedback into trained policies. RE-MOVE incorporates epistemic uncertainty to determine the optimal time to request feedback from humans and uses language-based feedback for real-time adaptation. We perform extensive synthetic and real-world evaluations to demonstrate the benefits of our proposed approach in several test-time dynamic navigation scenarios. Our approach enables robots to learn from human feedback and adapt to previously unseen adversarial situations.
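
To make the uncertainty-triggered feedback request concrete, here is a minimal Python sketch. It assumes an ensemble-disagreement estimate of epistemic uncertainty and a simple feedback hook; the abstract does not specify the paper's actual estimator, threshold, or feedback-fusion mechanism, so all names and parameters below are hypothetical.

    import numpy as np

    UNCERTAINTY_THRESHOLD = 0.5  # hypothetical tuning parameter

    def epistemic_uncertainty(policies, observation):
        # Proxy for epistemic uncertainty: disagreement among ensemble members.
        actions = np.stack([policy(observation) for policy in policies])
        return float(actions.std(axis=0).mean())

    def incorporate_feedback(observation, hint):
        # Hypothetical hook: fold the language hint into the policy input,
        # e.g. via a text encoder; here it is a no-op placeholder.
        return observation

    def act(policies, observation, ask_human):
        # Act normally when confident; request language feedback otherwise.
        if epistemic_uncertainty(policies, observation) > UNCERTAINTY_THRESHOLD:
            hint = ask_human(observation)  # e.g. "the shiny patch ahead is glass"
            observation = incorporate_feedback(observation, hint)
        return np.mean([policy(observation) for policy in policies], axis=0)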
Learning-Aware Safety for Interactive Autonomy
One of the outstanding challenges for the widespread deployment of robotic systems like autonomous vehicles is ensuring safe interaction with humans without sacrificing efficiency. Existing safety analysis methods often neglect the robot's ability to learn and adapt at runtime, leading to overly conservative behavior. This paper proposes a new closed-loop paradigm for synthesizing safe control policies that explicitly account for the system's evolving uncertainty under possible future scenarios. The formulation reasons jointly about the physical dynamics and the robot's learning algorithm, which updates its internal belief over time. We leverage adversarial deep reinforcement learning (RL) for scaling to high dimensions, enabling tractable safety analysis even for implicit learning dynamics induced by state-of-the-art prediction models. We demonstrate our framework's ability to work with both Bayesian belief propagation and the implicit learning induced by a large pre-trained neural trajectory predictor.
Comment: Conference on Robot Learning 202
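
The core idea, reasoning jointly about physical motion and the robot's belief update, can be illustrated with a toy rollout. This sketch assumes a simple Bayesian belief over two hypothetical human intents and exhaustive rollouts; the paper instead scales the analysis with adversarial deep RL, so treat this only as an illustration of the principle.

    import numpy as np

    def belief_update(belief, likelihoods):
        # Bayes rule: fold evidence about the human's motion into the belief.
        posterior = belief * likelihoods
        return posterior / posterior.sum()

    def worst_case_safe(robot_plan, human_models, prior, horizon=10, margin=1.0):
        # Roll out each intent hypothesis while updating the belief, and demand
        # the safety margin only under hypotheses the data still supports; this
        # is what makes the analysis less conservative than a fixed worst case.
        for intent, human in enumerate(human_models):
            belief = prior.copy()
            robot, person = np.zeros(2), np.array([5.0, 0.0])
            for t in range(horizon):
                robot = robot + robot_plan(t)
                person = person + human(t)
                likelihoods = np.array(
                    [0.9 if i == intent else 0.1 for i in range(len(human_models))])
                belief = belief_update(belief, likelihoods)
                if belief[intent] > 0.05 and np.linalg.norm(robot - person) < margin:
                    return False
        return True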
Experiments in artificial theory of mind: From safety to story-telling
Theory of mind is the term given by philosophers and psychologists to the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend, firstly, that such models, especially when tested experimentally, can provide useful insights into cognition, and secondly, that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these, a robot equipped with the internal model interacts with other robots that lack an internal model but act as proxy humans; in others, two robots, each with a simulation-based internal model, interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
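
As a concrete illustration of the simulation-based internal model, consider the following Python sketch of a consequence-engine loop. The world model, candidate actions, and harm scores are hypothetical stand-ins; the experiments described in the paper use a physics-based robot simulator rather than these stubs.

    def choose_action(world, actions, simulate, harm):
        # Simulate each candidate action and score its predicted consequences
        # for the robot itself and for the other actors it can model.
        scores = {}
        for action in actions:
            predicted = simulate(world, action)
            scores[action] = harm(predicted, "self") + harm(predicted, "others")
        return min(scores, key=scores.get)  # act to minimise predicted harm

    # Hypothetical usage: a robot choosing among discrete moves.
    def simulate(world, action):  # stub next-state prediction
        return {"collision": action == "ahead", "other_at_risk": action == "stop"}

    def harm(state, who):
        if who == "self":
            return 10 if state["collision"] else 0
        return 10 if state["other_at_risk"] else 0

    best = choose_action({}, ["ahead", "left", "right", "stop"], simulate, harm)
    # best == "left": the first action with no predicted harm to self or others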
Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
Robotics and Artificial Intelligence (AI) have been inextricably intertwined since their inception. Today, AI-Robotics systems have become an integral part of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These systems are built upon three fundamental architectural elements: perception, navigation and planning, and control. However, while the integration of AI-Robotics systems has enhanced the quality of our lives, it has also presented a serious problem: these systems are vulnerable to security attacks. The physical components, algorithms, and data that make up AI-Robotics systems can be exploited by malicious actors, potentially leading to dire consequences. Motivated by the need to address these security concerns, this paper presents a comprehensive survey and taxonomy across three dimensions: attack surfaces, ethical and legal concerns, and Human-Robot Interaction (HRI) security. Our goal is to provide users, developers, and other stakeholders with a holistic understanding of these areas to enhance overall AI-Robotics system security. We begin by surveying potential attack surfaces and providing mitigating defensive strategies. We then delve into ethical issues, such as dependency and psychological impact, as well as the legal concerns regarding accountability for these systems. In addition, emerging trends such as HRI are discussed, considering privacy, integrity, safety, trustworthiness, and explainability concerns. Finally, we present our vision for future research directions in this dynamic and promising field.