Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module
The goal of this deliverable is two-fold: (1) to present and compare different approaches to learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (in the past, during the first six months of the project), and (2) to analyze their suitability as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. with a clear goal where the movement stops) and modules for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics such as the type of dynamical behavior, the learning algorithm, generalization properties, and stability analysis are then discussed for each approach. Finally, we compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
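To make the discrete/rhythmic distinction concrete, the toy sketch below (Python/NumPy, with arbitrary gains and a Hopf oscillator chosen purely for illustration, not taken from any of the eight approaches) contrasts a point-attractor system, whose trajectory stops at a goal, with a limit-cycle system, whose trajectory is periodic.

    import numpy as np

    def discrete_module(x0, goal, alpha=25.0, beta=6.25, dt=0.001, steps=2000):
        # Point attractor (critically damped spring-damper): converges to `goal` and stops.
        x, v = x0, 0.0
        traj = []
        for _ in range(steps):
            a = alpha * (beta * (goal - x) - v)
            v += a * dt
            x += v * dt
            traj.append(x)
        return np.array(traj)

    def rhythmic_module(mu=1.0, omega=2.0 * np.pi, dt=0.001, steps=2000):
        # Hopf oscillator: settles onto a limit cycle of radius sqrt(mu) and keeps oscillating.
        x, y = 0.1, 0.0
        traj = []
        for _ in range(steps):
            r2 = x * x + y * y
            dx = (mu - r2) * x - omega * y
            dy = (mu - r2) * y + omega * x
            x, y = x + dx * dt, y + dy * dt
            traj.append(x)
        return np.array(traj)

    print(discrete_module(0.0, 1.0)[-1])   # ~1.0: the movement has a clear end point
    print(rhythmic_module()[-3:])          # still oscillating: the movement is periodic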
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
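As a point of reference for the prediction task the survey addresses, the sketch below shows the simplest possible motion model, constant-velocity extrapolation. It is offered only as an illustrative baseline, not as a method from the survey, and the sampling interval is an assumed value.

    import numpy as np

    def predict_constant_velocity(observed_xy, n_future, dt=0.4):
        # observed_xy: (T, 2) array of past positions sampled every `dt` seconds.
        # Returns an (n_future, 2) array of predicted future positions.
        observed_xy = np.asarray(observed_xy, dtype=float)
        velocity = (observed_xy[-1] - observed_xy[-2]) / dt   # last-step velocity estimate
        steps = np.arange(1, n_future + 1)[:, None]
        return observed_xy[-1] + steps * velocity * dt

    # Example: a pedestrian walking roughly along +x at about 1.25 m/s
    past = [[0.0, 0.0], [0.5, 0.0], [1.0, 0.1], [1.5, 0.1]]
    print(predict_constant_velocity(past, n_future=3))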
Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation
Manipulation of deformable objects, such as ropes and cloth, is an important
but challenging problem in robotics. We present a learning-based system where a
robot takes as input a sequence of images of a human manipulating a rope from
an initial to goal configuration, and outputs a sequence of actions that can
reproduce the human demonstration, using only monocular images as input. To
perform this task, the robot learns a pixel-level inverse dynamics model of
rope manipulation directly from images in a self-supervised manner, using about
60K interactions with the rope collected autonomously by the robot. The human
demonstration provides a high-level plan of what to do and the low-level
inverse model is used to execute the plan. We show that by combining the high
and low-level plans, the robot can successfully manipulate a rope into a
variety of target shapes using only a sequence of human-provided images for
direction.
Comment: 8 pages, accepted to International Conference on Robotics and Automation (ICRA) 201
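The division of labor described above can be sketched as a short control loop in which the demonstration images serve as visual subgoals and the learned inverse model supplies the actions. The function names below are hypothetical placeholders; the trained pixel-level model itself is abstracted behind a single call.

    def follow_demonstration(demo_images, observe, inverse_model, execute):
        # demo_images:  ordered frames of the human demonstration (the high-level plan)
        # observe():    returns the robot's current camera image
        # inverse_model(current, subgoal): learned mapping from an image pair to an action
        # execute(action): applies the action (e.g. where to grab the rope and where to drop it)
        for subgoal in demo_images[1:]:
            current = observe()
            action = inverse_model(current, subgoal)   # low-level inverse dynamics step
            execute(action)                            # move the rope toward the next subgoal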
Integrating multi-sensory input in the body model – a RNN approach to connect proprioception, visual features and motor control
Schilling M. Integrating multi-sensory input in the body model – a RNN approach to connect proprioception, visual features and motor control. In: Proceedings of the International Joint Conference on Neural Networks 2011, San Jose (CA). 2011
Diagnostic and adaptive redundant robotic planning and control
Neural networks and fuzzy logic are combined into a hierarchical structure capable of planning, diagnosis, and control for a redundant, nonlinear robotic system in a real-world scenario. Throughout this work, the levels of this overall approach are demonstrated for a redundant robot and hand combination as it is commanded to approach, grasp, and successfully manipulate objects for a wheelchair-bound user in a crowded, unpredictable environment. Four levels of hierarchy are developed and demonstrated, from the lowest level upward: diagnostic individual motor control, optimal redundant joint allocation for trajectory planning, grasp planning with tip and slip control, and high-level task planning for multiple arms and manipulated objects. Given the expectations of the user and the constantly changing nature of the processes involved, the robot hierarchy learns from its experiences in order to execute the next related task more efficiently, allocating this knowledge to the appropriate levels of planning and control. The above approaches are then extended to automotive and space applications.
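Read as a control architecture, the four levels compose top-down roughly as in the sketch below; every name is an illustrative placeholder, and the neural-network and fuzzy-logic machinery that actually implements each level is not reproduced.

    def hierarchical_step(user_goal, world_state,
                          plan_task,        # level 4: task planning for arms and objects
                          plan_grasp,       # level 3: grasp planning with tip/slip control
                          allocate_joints,  # level 2: redundant joint allocation for the trajectory
                          drive_motors):    # level 1: diagnostic individual motor control
        subtask = plan_task(user_goal, world_state)           # e.g. approach, grasp, or manipulate
        grasp, trajectory = plan_grasp(subtask, world_state)  # contacts plus an approach path
        joint_setpoints = allocate_joints(trajectory)         # resolve the arm's redundancy
        drive_motors(joint_setpoints, grasp)                  # command and monitor each motor
        return subtask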
Body models in humans, animals, and robots: mechanisms and plasticity
Humans and animals excel in combining information from multiple sensory
modalities, controlling their complex bodies, and adapting to growth, failures, or
tool use. These capabilities are also highly desirable in robots. They are
displayed by machines to some extent - yet, as is so often the case, the
artificial creatures are lagging behind. The key foundation is an internal
representation of the body that the agent - human, animal, or robot - has
developed. In the biological realm, evidence has been accumulated by diverse
disciplines giving rise to the concepts of body image, body schema, and others.
In robotics, a model of the robot is an indispensable component that makes it
possible to control the machine. In this article I compare the character of body
representations in biology with their robotic counterparts and relate that to
the differences in performance that we observe. I put forth a number of axes
regarding the nature of such body models: fixed vs. plastic, amodal vs. modal,
explicit vs. implicit, serial vs. parallel, modular vs. holistic, and
centralized vs. distributed. An interesting trend emerges: on many of the axes,
there is a sequence from robot body models, through body image and body schema, to
the body representation in lower animals like the octopus. In some sense,
robots have a lot in common with Ian Waterman - "the man who lost his body" -
in that they rely on an explicit, veridical body model (body image taken to the
extreme) and lack any implicit, multimodal representation (like the body
schema) of their bodies. I will then detail how robots can inform the
biological sciences dealing with body representations and finally, I will study
which of the features of the "body in the brain" should be transferred to
robots, giving rise to more adaptive and resilient, self-calibrating machines.
Comment: 27 pages, 8 figures