RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction
Robots have the potential to revolutionize the way we interact with the world
around us. One of their most promising domains is mobile health,
where they can be used to facilitate clinical interventions. However, to
accomplish this, robots need to have access to our private data in order to
learn from these data and improve their interaction capabilities. Furthermore,
to enhance this learning process, sharing knowledge among multiple robot
units is the natural next step. However, to date, there is no
well-established framework which allows for such data sharing while preserving
the privacy of the users (e.g., the hospital patients). To this end, we
introduce RoboChain - the first learning framework for secure, decentralized
and computationally efficient data and model sharing among multiple robot units
installed at multiple sites (e.g., hospitals). RoboChain builds upon and
combines the latest advances in open data access and blockchain technologies,
as well as machine learning. We illustrate this framework using the example of
a clinical intervention conducted in a private network of hospitals.
Specifically, we lay down the system architecture that allows multiple robot
units, conducting the interventions at different hospitals, to perform
efficient learning without compromising data privacy. Comment: 7 pages, 6 figures
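The abstract does not give implementation details, but the core idea it names, recording model updates from multiple sites on a tamper-evident decentralized ledger, can be sketched as a hash chain. Everything below (class name, site identifiers, the chaining scheme) is an illustrative assumption, not RoboChain's actual protocol:

```python
import hashlib
import json

class ModelUpdateLedger:
    """Toy append-only ledger: each entry includes the hash of the previous
    entry, so tampering with any recorded model update is detectable."""

    def __init__(self):
        self.chain = []

    def record(self, site_id, model_params):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        payload = json.dumps({"site": site_id, "params": model_params,
                              "prev": prev_hash}, sort_keys=True)
        entry = {"site": site_id, "params": model_params, "prev": prev_hash,
                 "hash": hashlib.sha256(payload.encode()).hexdigest()}
        self.chain.append(entry)
        return entry["hash"]

    def verify(self):
        """Re-hash every entry; any edit to params breaks the chain."""
        prev = "0" * 64
        for entry in self.chain:
            payload = json.dumps({"site": entry["site"],
                                  "params": entry["params"],
                                  "prev": prev}, sort_keys=True)
            if (entry["prev"] != prev or
                    entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
                return False
            prev = entry["hash"]
        return True

ledger = ModelUpdateLedger()
ledger.record("hospital_A", [0.12, -0.50])  # hypothetical model parameters
ledger.record("hospital_B", [0.10, -0.45])
print(ledger.verify())  # True
```

Only hashes and parameters appear on the ledger here; in the paper's setting, the raw patient data would stay inside each hospital's private network.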
The Usage and Evaluation of Anthropomorphic Form in Robot Design
There are numerous examples illustrating the application of human shape in everyday products. The use of anthropomorphic form has long been a basic design strategy, particularly in the design of intelligent service robots. As such, it is desirable to use anthropomorphic form not only in aesthetic design but also in interaction design. Proceeding from how anthropomorphism in various domains affects human perception, we assumed that anthropomorphic form used in the appearance and interaction design of robots enriches the explanation of a robot's function and creates familiarity with robots. In many cases we found that misused anthropomorphic form leads to user disappointment or negative impressions of the robot. In order to use anthropomorphic form effectively, it is necessary to measure the similarity of an artifact to the human form (humanness), and then evaluate whether the usage of anthropomorphic form fits the artifact. The goal of this study is to propose a general evaluation framework of anthropomorphic form for robot design. We suggest three major steps for framing the evaluation: 'measuring anthropomorphic form in appearance', 'measuring anthropomorphic form in Human-Robot Interaction', and 'evaluating the accordance of the two former measurements'. This evaluation process will endow a robot with a degree of humanness in appearance that matches its humanness in interaction ability, and thus ultimately facilitate user satisfaction.
Keywords:
Anthropomorphic Form; Anthropomorphism; Human-Robot Interaction; Humanness; Robot Design
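The three-step evaluation the abstract proposes, measuring humanness in appearance, measuring humanness in interaction, then checking their accordance, could be sketched as a simple comparison. The [0, 1] score range, the tolerance, and the labels below are hypothetical choices for illustration, not taken from the paper:

```python
def accordance(appearance_humanness, interaction_humanness, tol=0.15):
    """Hypothetical accordance check. Both scores are assumed to lie in
    [0, 1]; the design 'accords' when the appearance does not promise more
    (or less) humanness than the interaction delivers, within tol."""
    gap = appearance_humanness - interaction_humanness
    if abs(gap) <= tol:
        return "accordant"
    # A very human-like shell with weak interaction skills is the
    # mismatch the abstract warns leads to user disappointment.
    return "over-anthropomorphized" if gap > 0 else "under-anthropomorphized"

print(accordance(0.9, 0.4))   # over-anthropomorphized
print(accordance(0.5, 0.55))  # accordant
```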
Collision Detection and Reaction: A Contribution to Safe Physical Human-Robot Interaction
In the framework of physical Human-Robot Interaction
(pHRI), methodologies and experimental tests are
presented for the problem of detecting and reacting to collisions
between a robot manipulator and a human being. Using a
lightweight robot that was especially designed for interactive
and cooperative tasks, we show how reactive control strategies
can significantly contribute to ensuring safety to the human
during physical interaction. Several collision tests were carried
out, illustrating the feasibility and effectiveness of the proposed
approach. While a subjective “safety” feeling is experienced by
users when being able to naturally stop the robot in autonomous
motion, a quantitative analysis of different reaction strategies
was lacking. In order to compare these strategies on an objective
basis, a mechanical verification platform has been built. The
proposed collision detection and reactions methods prove to
work very reliably and are effective in reducing contact forces
far below any level which is dangerous to humans. Evaluations
of impacts between robot and human arm or chest up to a
maximum robot velocity of 2.7 m/s are presented
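As a rough illustration of the detect-and-react pipeline described above, the following sketch thresholds an estimated external joint torque and switches between three reaction strategies. The threshold, gains, and strategy names are illustrative assumptions, not the paper's actual controllers:

```python
import numpy as np

def detect_collision(tau_ext, threshold=2.0):
    """Flag a collision when any estimated external joint torque
    exceeds the threshold (values are illustrative, in N*m)."""
    return bool(np.any(np.abs(tau_ext) > threshold))

def react(strategy, tau_ext, q_dot, damping=5.0):
    """Return a reaction joint-torque command for three illustrative
    strategies: 'stop' brakes the joints, 'reflex' drives the robot
    along the sensed contact torque (away from the obstacle), and
    'zero_g' leaves only gravity compensation so the robot can be
    pushed freely."""
    if strategy == "stop":
        return -damping * q_dot          # damp joint velocity to zero
    if strategy == "reflex":
        return 1.5 * tau_ext             # comply with the external push
    if strategy == "zero_g":
        return np.zeros_like(q_dot)      # no extra torque
    raise ValueError(f"unknown strategy: {strategy}")

tau_ext = np.array([0.1, 3.2, 0.0])      # hypothetical torque estimate
if detect_collision(tau_ext):
    cmd = react("reflex", tau_ext, q_dot=np.array([0.0, 0.4, 0.0]))
```

In practice the external torque would come from joint torque sensing or a disturbance observer rather than being given directly, as the momentum-based observer in the next abstract illustrates.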
A Framework of Hybrid Force/Motion Skills Learning for Robots
Human factors and human-centred design philosophy are highly desired in today's robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs), taking into consideration both the positional and the contact force profiles for human-robot skill transfer. Unlike conventional methods that involve only motion information, the proposed framework combines two sets of DMPs, which model the motion trajectory and the force variation of the robot manipulator, respectively. Thus, a hybrid force/motion control approach is taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, in order to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force robot manipulation skills in a broader variety of tasks, the generalization of these DMP models to new situations is also considered. Comparative experiments have been conducted using a Baxter Robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
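A minimal one-dimensional DMP illustrates the building block the framework uses twice, once for the motion trajectory and once for the force profile. This sketch fits the forcing term directly from a demonstration instead of using the radial-basis-function regression a full DMP implementation would use, so it is a simplification under assumed gains:

```python
import numpy as np

def run_dmp(y_demo, dt=0.01, alpha=25.0, beta=6.25):
    """Sketch of a discrete DMP: tau*z' = alpha*(beta*(g - y) - z) + f,
    tau*y' = z. The forcing term f is computed so the demonstration
    satisfies these equations, then the system is integrated forward
    to reproduce the demonstrated profile."""
    T = len(y_demo)
    tau = T * dt
    g = y_demo[-1]                       # goal = final demo value
    yd = np.gradient(y_demo, dt)         # demo velocity
    ydd = np.gradient(yd, dt)            # demo acceleration
    f_target = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)

    y, z = y_demo[0], tau * yd[0]        # initial state from the demo
    out = []
    for t in range(T):                   # forward Euler integration
        zd = (alpha * (beta * (g - y) - z) + f_target[t]) / tau
        y += (z / tau) * dt
        z += zd * dt
        out.append(y)
    return np.array(out)

demo = np.sin(np.linspace(0, np.pi / 2, 100))  # hypothetical demo profile
repro = run_dmp(demo)                          # closely tracks the demo
```

Feeding a recorded contact-force profile in place of the position demo yields the second DMP of the pair; the hybrid controller then tracks both reproductions simultaneously.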
Learning and Reasoning for Robot Sequential Decision Making under Uncertainty
Robots frequently face complex tasks that require more than one action, where
sequential decision-making (SDM) capabilities become necessary. The key
contribution of this work is a robot SDM framework, called LCORPP, that
supports the simultaneous capabilities of supervised learning for passive state
estimation, automated reasoning with declarative human knowledge, and planning
under uncertainty toward achieving long-term goals. In particular, we use a
hybrid reasoning paradigm to refine the state estimator, and provide
informative priors for the probabilistic planner. In experiments, a mobile
robot is tasked with estimating human intentions using their motion
trajectories, declarative contextual knowledge, and human-robot interaction
(dialog-based and motion-based). Results suggest that, in efficiency and
accuracy, our framework performs better than its no-learning and no-reasoning
counterparts in an office environment. Comment: In proceedings of the 34th AAAI Conference on Artificial Intelligence,
202
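The way a hybrid reasoner can refine a learned state estimate into an informative prior for the probabilistic planner can be sketched as a Bayes-style combination. The intention labels and all probabilities below are hypothetical, and the real system's learner and reasoner are far richer than two dictionaries:

```python
def refined_prior(learned_likelihoods, contextual_prior):
    """Combine a learned likelihood over human intentions (e.g., from a
    classifier over motion trajectories) with a prior derived from
    declarative contextual knowledge; the normalized result seeds the
    planner's initial belief."""
    posterior = {i: learned_likelihoods[i] * contextual_prior[i]
                 for i in contextual_prior}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

# Hypothetical numbers: a trajectory classifier combined with the
# contextual rule "people walking toward the printer usually intend
# to print".
likes = {"print": 0.6, "pass_by": 0.4}
prior = {"print": 0.8, "pass_by": 0.2}
post = refined_prior(likes, prior)
print(post)  # 'print' dominates (~0.857)
```

Reasoning and learning reinforce each other here: a weak learned signal is sharpened by knowledge, which is what lets the full framework beat its no-learning and no-reasoning ablations.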