A Bio-Inspired Tensegrity Manipulator with Multi-DOF, Structurally Compliant Joints
Most traditional robotic mechanisms feature inelastic joints that cannot
robustly handle large deformations and off-axis moments. As a result, applied
loads are transferred rigidly throughout the entire structure. The
disadvantage of this approach is that the exerted leverage is magnified at
each subsequent joint, possibly damaging the mechanism. In this paper, we
present two lightweight, elastic, bio-inspired tensegrity robotic arms that
mitigate this danger while improving the mechanism's functionality. Our
solutions feature modular tensegrity structures that, when connected, function
similarly to the human elbow and the human shoulder. Like their biological
counterparts, the proposed robotic joints are flexible and comply with
unanticipated forces. Both proposed structures have multiple passive degrees
of freedom and four active degrees of freedom (two from the shoulder and two
from the elbow). The structural advantages demonstrated by the joints in these
manipulators illustrate a solution to the fundamental issue of elegantly
handling off-axis compliance.
Comment: IROS 201
NICOL: A Neuro-inspired Collaborative Semi-humanoid Robot that Bridges Social Interaction and Reliable Manipulation
Robotic platforms that can efficiently collaborate with humans in physical
tasks constitute a major goal in robotics. However, many existing robotic
platforms are either designed for social interaction or industrial object
manipulation tasks. The design of collaborative robots seldom emphasizes both
their social interaction and physical collaboration abilities. To bridge this
gap, we present the novel semi-humanoid NICOL, the Neuro-Inspired COLlaborator.
NICOL is a large, newly designed, scaled-up version of its well-evaluated
predecessor, the Neuro-Inspired COmpanion (NICO). NICOL adopts NICO's head and
facial expression display and extends its manipulation abilities in terms of
precision, object size, and workspace size. Our contribution in this paper is
twofold -- firstly, we introduce the design concept for NICOL, and secondly, we
provide an evaluation of NICOL's manipulation abilities by presenting a novel
extension for an end-to-end hybrid neuro-genetic visuomotor learning approach
adapted to NICOL's more complex kinematics. We show that the approach
outperforms the state-of-the-art inverse kinematics (IK) solvers KDL, TRAC-IK,
and BIO-IK. Overall, this article presents the humanoid robot NICOL for the
first time, and contributes to the integration of social robotics and neural
visuomotor learning for humanoid robots.
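As background to the solver comparison above, the following is a minimal sketch of what a numerical IK solver does, using damped least squares on a hypothetical 2-link planar arm. The link lengths, damping factor, and iteration count are assumed illustrative values; this is unrelated to NICOL's actual kinematics and not how the named solvers are implemented internally.

```python
import math

# Hypothetical 2-link planar arm; link lengths are assumed values.
L1, L2 = 1.0, 0.8

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik_dls(target, q=(0.3, 0.3), damping=0.1, iters=200):
    """Damped-least-squares IK: iteratively update joint angles so that
    fk(q) approaches the target, via dq = J^T (J J^T + lambda^2 I)^-1 e."""
    q1, q2 = q
    for _ in range(iters):
        x, y = fk(q1, q2)
        ex, ey = target[0] - x, target[1] - y      # task-space error
        # Jacobian of fk with respect to (q1, q2)
        j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
        j12 = -L2 * math.sin(q1 + q2)
        j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
        j22 = L2 * math.cos(q1 + q2)
        # (J J^T + lambda^2 I), written out for the 2x2 case
        a = j11 * j11 + j12 * j12 + damping ** 2
        b = j11 * j21 + j12 * j22
        d = j21 * j21 + j22 * j22 + damping ** 2
        det = a * d - b * b
        wx = (d * ex - b * ey) / det               # solve the 2x2 system
        wy = (-b * ex + a * ey) / det
        q1 += j11 * wx + j21 * wy                  # dq = J^T w
        q2 += j12 * wx + j22 * wy
    return q1, q2
```

The damping term keeps the update well-conditioned near singular configurations, which is the practical reason this formulation is preferred over a plain Jacobian pseudoinverse.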
Towards adaptive and autonomous humanoid robots: from vision to actions
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios create a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation.

The main focus of this research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable, and robust, and requires only small training sets (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification.

Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables the robot to operate in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
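The abstract does not specify how the reactive, on-the-fly reaching is realized; one common way to sketch obstacle-avoiding reaching is a potential-field controller, where an attractive pull toward the target is combined with a repulsive push from nearby obstacles. The sketch below works in this spirit only; all gains, radii, and step sizes are made-up values.

```python
import math

def reach_step(pos, target, obstacle, step=0.05, k_rep=0.02, influence=0.5):
    """One control step: attract toward the target, repel from the
    obstacle when inside its influence radius (potential-field style)."""
    # Attractive component: unit vector toward the target
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist_t = math.hypot(dx, dy) or 1e-9
    vx, vy = dx / dist_t, dy / dist_t
    # Repulsive component: push away from a nearby obstacle
    ox, oy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    dist_o = math.hypot(ox, oy)
    if 1e-9 < dist_o < influence:
        gain = k_rep * (1.0 / dist_o - 1.0 / influence) / dist_o ** 2
        vx += gain * ox
        vy += gain * oy
    # Normalize so every step has fixed length
    norm = math.hypot(vx, vy) or 1e-9
    return pos[0] + step * vx / norm, pos[1] + step * vy / norm

def reach(pos, target, obstacle, tol=0.05, max_steps=500):
    """Step until within tol of the target; the obstacle argument can
    change between calls, which is what makes the behavior reactive."""
    for _ in range(max_steps):
        if math.hypot(target[0] - pos[0], target[1] - pos[1]) < tol:
            break
        pos = reach_step(pos, target, obstacle)
    return pos
```

Because the obstacle position is re-read at every step, moving it mid-trajectory simply bends the remaining path, which matches the "obstacle moved into the trajectory" behavior described above.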
Sensors for Robotic Hands: A Survey of State of the Art
Recent decades have seen significant progress in the field of artificial hands. Most of the surveys that try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Continuous online adaptation
for lifelong learning, and sample-efficient mechanisms to adapt to changes in
the environment, the constraints, the tasks, or the robot itself, are
therefore crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation, based on a
bio-inspired stochastic recurrent neural network. By using learning signals
that mimic the intrinsic motivation signal of cognitive dissonance, in
combination with a mental replay strategy to intensify experiences, the
stochastic recurrent network can learn from few physical interactions and
adapt to novel environments in seconds. We evaluate our online planning and
adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online
adaptation is demonstrated by sample-efficiently learning unknown workspace
constraints from few physical interactions while following given waypoints.
Comment: accepted in Neural Network
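As an illustration only (none of the following is from the paper), the interplay of an error-driven intrinsic signal and mental replay can be sketched with a toy scalar learner. The class name, the single-weight model, and all constants are invented; the paper's actual model is a stochastic recurrent network.

```python
import random

class OnlineAdapter:
    """Toy online adaptation loop: a scalar predictor whose update is
    scaled by its own prediction error (a dissonance-like intrinsic
    signal), with stored experiences replayed to squeeze extra learning
    out of few real interactions."""

    def __init__(self, lr=0.1, replay_passes=5):
        self.w = 0.0             # single weight: predicts y from x
        self.lr = lr
        self.replay_passes = replay_passes
        self.memory = []         # experience buffer for mental replay

    def _update(self, x, y):
        error = y - self.w * x            # prediction error
        gain = min(1.0, abs(error))       # intrinsic signal: bigger
        self.w += self.lr * gain * error * x  # surprise -> stronger update
        return error

    def observe(self, x, y):
        """One real physical interaction, followed by mental replay."""
        self.memory.append((x, y))
        self._update(x, y)
        for _ in range(self.replay_passes):   # replay past experiences
            xr, yr = random.choice(self.memory)
            self._update(xr, yr)
```

The point of the sketch is the structure of the loop: each real interaction is amplified by several replayed updates, which is one way "learning from few physical interactions" can be operationalized.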
Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotic
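The known/familiar/unknown taxonomy can be illustrated with a toy dispatcher. The feature vectors, the cosine similarity measure, and the 0.9 threshold below are invented for illustration and do not come from the survey or any specific system.

```python
import math

# Toy database: objects with stored grasps, and toy shape descriptors.
KNOWN_GRASPS = {"mug": "stored grasp set, indexed by estimated pose"}
FEATURES = {"mug": (0.9, 0.3), "bottle": (0.8, 0.7)}  # made-up features

def cosine(u, v):
    """Cosine similarity between two 2-D feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def synthesize_grasp(label, features):
    """Dispatch on the survey's three-way object taxonomy."""
    # Known object: recognition + pose estimation, reuse stored grasps
    if label in KNOWN_GRASPS:
        return "known"
    # Familiar object: similarity matching to previously seen objects
    if any(cosine(features, f) > 0.9 for f in FEATURES.values()):
        return "familiar"
    # Unknown object: extract features indicative of good grasps directly
    return "unknown"
```

The dispatch structure is the point: which perceptual machinery runs (recognition and pose estimation, similarity matching, or direct feature extraction) follows from which category the object falls into.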
09341 Abstracts Collection -- Cognition, Control and Learning for Robot Manipulation in Human Environments
From 16.08. to 21.08.2009, the Dagstuhl Seminar 09341 ``Cognition, Control and Learning for Robot Manipulation in Human Environments'' was held
in Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.