Interaction primitives for human-robot cooperation tasks
To engage in cooperative activities with human
partners, robots have to possess basic interactive abilities
and skills. However, programming such interactive skills is a
challenging task, as each interaction partner can have different
timing or an alternative way of executing movements. In this
paper, we propose to learn interaction skills by observing how
two humans engage in a similar task. To this end, we introduce
a new representation called Interaction Primitives. Interaction
primitives build on the framework of dynamic motor primitives
(DMPs) by maintaining a distribution over the parameters of
the DMP. With this distribution, we can learn the inherent
correlations of cooperative activities which allow us to infer the
behavior of the partner and to participate in the cooperation.
We will provide algorithms for synchronizing and adapting the
behavior of humans and robots during joint physical activities.
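As a much-simplified illustration of the idea behind Interaction Primitives, the correlation between partners can be captured by a joint Gaussian over both agents' DMP parameters, and the robot's parameters inferred by conditioning on the observed human parameters. All dimensions, data, and names below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical setup: each agent's movement is encoded by 3 DMP weights,
# learned from several joint human-human demonstrations.
rng = np.random.default_rng(0)
demos = rng.normal(size=(20, 6))          # 20 demos x (3 weights human + 3 robot)
# Make the "robot" half correlated with the "human" half, as in cooperation.
demos[:, 3:] = 0.5 * demos[:, :3] + 0.1 * rng.normal(size=(20, 3))

mu = demos.mean(axis=0)                   # mean over demonstrations
Sigma = np.cov(demos, rowvar=False)       # joint covariance captures correlations

# Observe the human partner's DMP weights (first 3 dims), infer the robot's
# via standard Gaussian conditioning.
w_human = demos[0, :3]
S11 = Sigma[:3, :3]
S21 = Sigma[3:, :3]
w_robot = mu[3:] + S21 @ np.linalg.solve(S11, w_human - mu[:3])
print(w_robot.shape)                      # inferred robot DMP weights, shape (3,)
```

Because the toy data were generated with a 0.5 coupling, the conditioning approximately recovers half the observed human weights; with real demonstrations the same mechanism lets the robot complete its side of the interaction.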
Variable stiffness robotic hand for stable grasp and flexible handling
Robotic grasping is a challenging area in the field of robotics. When interacting with an object, the dynamic properties of the object play an important role: a gripper which, as a system, has been shown to be stable according to appropriate stability criteria can become unstable when coupled to an object. However, including a sufficiently compliant element within the actuation system of the robotic hand can increase the stability of the grasp in the presence of uncertainties. This paper deals with an innovative variable stiffness robotic hand design, VSH1, for industrial applications. The main objective of this work is to realise an affordable, as well as durable, adaptable, and compliant gripper for industrial environments with a larger interval of stiffness variability than similar existing systems. The driving system for the proposed hand consists of two servo motors and one linear spring arranged in a relatively simple fashion. Having just a single spring in the actuation system helps us to achieve a very small hysteresis band and provides a means by which to rapidly control the stiffness. We prove, both mathematically and experimentally, that the proposed model is characterised by a broad range of stiffness. To control the grasp, a first-order sliding mode controller (SMC) is designed and presented. The experimental results show how, despite the relatively simple implementation of our first prototype, the hand performs extremely well in terms of both stiffness variability and force controllability.
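The abstract does not give the controller's equations. As a generic sketch of what a first-order sliding mode force controller looks like, with a boundary layer to limit chattering, here is a minimal simulation; the plant model, gains, and setpoint are invented, not the VSH1 model:

```python
import numpy as np

# Illustrative first-order SMC for grasp-force tracking.
def smc_force_step(f, f_des, K=5.0, phi=0.05):
    s = f_des - f                      # sliding variable: force tracking error
    sat = np.clip(s / phi, -1.0, 1.0)  # boundary layer to reduce chattering
    return f_des + K * sat             # equivalent control + switching term

# Assumed first-order actuator dynamics: tau * f_dot = u - f
tau, dt, f, f_des = 0.1, 1e-3, 0.0, 2.0
for _ in range(2000):
    u = smc_force_step(f, f_des)
    f += dt * (u - f) / tau

print(round(f, 3))  # converges close to the 2.0 N setpoint
```

Outside the boundary layer the switching term drives the error to the sliding surface quickly; inside it, the saturated term acts like a high-gain proportional controller, which is what keeps the steady-state force at the setpoint without chattering.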
Control Implementation of Dynamic Locomotion on Compliant, Underactuated, Force-Controlled Legged Robots with Non-Anthropomorphic Design
The control of locomotion on legged robots traditionally involves a robot that takes a standard legged form, such as the anthropomorphic humanoid, the dog-like quadruped, or the bird-like biped. Additionally, these systems will often be actuated with position-controlled servos or series-elastic actuators that are connected through rigid links. This work investigates the control implementation of dynamic, force-controlled locomotion on a family of legged systems that significantly deviate from these classic paradigms by incorporating modern, state-of-the-art proprioceptive actuators on uniquely configured compliant legs that do not closely resemble those found in nature. The results of this work can be used to better inform how to implement controllers on legged systems without stiff, position-controlled actuators, and also provide insight on how intelligently designed mechanical features can potentially simplify the control of complex, nonlinear dynamical systems like legged robots. To this end, this work presents the approach to control for a family of non-anthropomorphic bipedal robotic systems which are developed both in simulation and with physical hardware. The first is the Non-Anthropomorphic Biped, Version 1 (NABi-1) that features position-controlled joints along with a compliant foot element on a minimally actuated leg, and is controlled using simple open-loop trajectories based on the Zero Moment Point. The second system is the second version of the non-anthropomorphic biped (NABi-2) which utilizes the proprioceptive Back-drivable Electromagnetic Actuator for Robotics (BEAR) modules for actuation and fully realizes feedback-based force controlled locomotion. These systems are used to highlight both the strengths and weaknesses of utilizing proprioceptive actuation in systems, and suggest the tradeoffs that are made when using force control for dynamic locomotion. 
These systems also present case studies for different approaches to system design when it comes to bipedal legged robots.
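The Zero Moment Point used for NABi-1's open-loop trajectories has a simple closed form under the common point-mass (linear inverted pendulum) assumption. The numbers below are illustrative, not taken from the thesis:

```python
import math

def zmp(x_com, x_com_ddot, z_com, g=9.81):
    """ZMP of a point mass held at constant height z_com."""
    return x_com - (z_com / g) * x_com_ddot

# CoM swaying sinusoidally above a foot of half-width 0.05 m (made-up numbers).
z, A, w = 0.8, 0.02, 2.0
worst = max(
    abs(zmp(A * math.sin(w * t), -A * w * w * math.sin(w * t), z))
    for t in [i * 0.01 for i in range(100)]
)
print(worst < 0.05)  # True: the ZMP stays inside the support polygon
```

Trajectory generation in this style amounts to choosing CoM motions whose resulting ZMP never leaves the support polygon, which is why it can be executed open-loop on position-controlled joints.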
Sensors for Robotic Hands: A Survey of State of the Art
Recent decades have seen significant progress in the field of artificial hands. Most of the
surveys which try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.
Review of the techniques used in motor‐cognitive human‐robot skill transfer
Abstract A conventional robot programming method extensively limits the reusability of skills in the developmental aspect. Engineers programme a robot in a targeted manner for the realisation of predefined skills. The low reusability of general‐purpose robot skills is mainly reflected in their inability to cope with novel and complex scenarios. Skill transfer aims to transfer human skills to general‐purpose manipulators or mobile robots to replicate human‐like behaviours. Skill transfer methods that are commonly used at present, such as learning from demonstration (LfD) or imitation learning, endow the robot with the expert's low‐level motor and high‐level decision‐making abilities, so that skills can be reproduced and generalised according to the perceived context. The improvement of robot cognition usually relates to an improvement in autonomous high‐level decision‐making ability. Based on the idea of establishing a generic or specialised robot skill library, robots are expected to autonomously reason about the need to use skills and to plan compound movements according to sensory input. In recent years, many studies in this area have successfully demonstrated their effectiveness. Herein, a detailed review is provided of skill transfer techniques, applications, advancements, and limitations, with a particular focus on LfD. Future research directions are also suggested.
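As a toy example of the LfD idea the review covers, a demonstrated motion can be inverted into a forcing term and replayed toward a new goal, DMP-style. This is a heavily simplified sketch; the constants, the demonstration, and the per-step forcing table are invented for illustration:

```python
import numpy as np

# Spring-damper transformation system constants (illustrative values).
K, D, dt, T = 100.0, 20.0, 0.01, 200
t = np.arange(T) * dt                     # 2 s demonstration

# Demonstration: smooth reach from 0 to 1 (minimum-jerk-like profile).
s = t / t[-1]
x_demo = 3 * s**2 - 2 * s**3
v_demo = np.gradient(x_demo, dt)
a_demo = np.gradient(v_demo, dt)
x0, g = x_demo[0], x_demo[-1]

# Invert the spring-damper model to extract a per-step forcing term.
f = a_demo - K * (g - x_demo) + D * v_demo

# Replay toward a NEW goal, scaling the forcing term by the goal distance.
g_new = 2.0
x, v = x0, 0.0
for i in range(T):
    a = K * (g_new - x) - D * v + f[i] * (g_new - x0) / (g - x0)
    v += a * dt
    x += v * dt
print(round(x, 2))  # endpoint lands near the new goal 2.0
```

The goal scaling is what gives the generalisation the review refers to: the same learned forcing term reproduces the demonstrated shape at a different amplitude.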
Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network
For a safe, natural and effective human-robot social interaction, it is
essential to develop a system that allows a robot to demonstrate perceivable responsive behaviors to complex human behaviors. We introduce the Multimodal Deep Attention Recurrent Q-Network (MDARQN), with which the robot exhibits human-like social interaction skills after 14 days of interacting with people in an uncontrolled real-world setting. Every day during this period, the system gathered robot interaction experiences with people through a trial-and-error method and then trained the MDARQN on these experiences using an end-to-end reinforcement learning approach. The results of interaction-based learning indicate that the robot has learned to respond to complex human behaviors in a perceivable and socially acceptable manner.
Comment: 7 pages, 5 figures, accepted by IEEE-RAS ICRA'1
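The abstract gives no architectural detail beyond the Q-network. As a drastically reduced illustration of the Q-learning update underlying such a system, here is a tabular stand-in; the states, actions, and reward function are invented, and the real MDARQN is a multimodal deep attention recurrent network, not a table:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 3        # e.g. perceived human behaviors / robot responses
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(s, a):
    # Invented proxy for a "socially acceptable" response to behavior s.
    return 1.0 if a == s % n_actions else -0.1

for _ in range(5000):
    s = int(rng.integers(n_states))                  # observed human behavior
    # Epsilon-greedy exploration, as in trial-and-error data gathering.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s_next = int(rng.integers(n_states))             # humans behave unpredictably
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(np.argmax(Q, axis=1))       # learned response per behavior
```

The same temporal-difference target drives the deep variant; the network simply replaces the table as the function approximator over camera and audio inputs.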
Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks
The next generation of robot companions or robot working partners will need to satisfy social requirements somehow similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greater part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the “services” of the animated body schema which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions. And so on and so forth.
At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.