    Human-like arm motion generation: a review

    In the last decade, the objectives outlined by the needs of personal robotics have led to the rise of new biologically inspired techniques for arm motion planning. This paper presents a literature review of the most recent research on the generation of human-like arm movements in humanoid and manipulation robotic systems. Search methods and inclusion criteria are described. The studies are analyzed with respect to the sources of publication, the experimental settings, the types of movements, the technical approaches, and the human motor principles used to inspire and assess human-likeness. Results show a strong focus on the generation of single-arm reaching movements and on biomimetic methods, whereas little attention has been paid to manipulation, obstacle-avoidance mechanisms, and dual-arm motion generation. As a consequence, human-like arm motion generation may not fully respect key human behavioral and neurological features and may remain restricted to specific tasks of human-robot interaction. Limitations and challenges are discussed to provide meaningful directions for future investigations.
    Funding: FCT Project UID/MAT/00013/2013; FCT – Fundação para a Ciência e Tecnologia, within the R&D Units Project Scope UIDB/00319/2020.

    Learning soft task priorities for control of redundant robots

    Movement primitives (MPs) provide a powerful framework for data-driven movement generation that has been successfully applied to learning from demonstrations and robot reinforcement learning. In robotics, we often want to solve a multitude of different but related tasks. As the parameters of the primitives are typically high-dimensional, a common practice for generalizing movement primitives to new tasks is to adapt only a small set of control variables, also called meta-parameters, of the primitive. Yet, for most MP representations, the encoding of these control variables is fixed in the representation and cannot be adapted to the tasks under consideration. In this paper, we learn the encoding of task-specific control variables from data instead of relying on fixed meta-parameter representations. We use hierarchical Bayesian models (HBMs) to estimate a low-dimensional latent variable model for probabilistic movement primitives (ProMPs), a recent movement primitive representation. We show on two real robot datasets that ProMPs based on HBMs outperform standard ProMPs in terms of generalization and learning from small amounts of data, and also allow for an intuitive analysis of the movement. We further extend our HBM with a mixture model, so that different movement types in the same dataset can be modeled.
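
    For illustration, the sketch below captures the core idea of placing a low-dimensional latent variable over ProMP weights. It is a minimal sketch under stated assumptions: the RBF basis, the dimensions, and the use of PCA as a stand-in for full hierarchical Bayesian inference are choices made here for brevity, not the authors' implementation.

        import numpy as np

        def rbf_basis(t, n_basis=10, width=0.05):
            # Normalized Gaussian basis functions over phase t in [0, 1]
            centers = np.linspace(0, 1, n_basis)
            phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
            return phi / phi.sum(axis=1, keepdims=True)

        def fit_promp_weights(demos, n_basis=10):
            # Ridge-regress one weight vector per demonstrated trajectory
            t = np.linspace(0, 1, demos.shape[1])
            Phi = rbf_basis(t, n_basis)                            # (T, n_basis)
            W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_basis),
                                Phi.T @ demos.T).T                 # (n_demos, n_basis)
            return W, Phi

        def latent_promp(W, n_latent=2):
            # Latent model w ~ mean + z @ A; PCA here is a simple stand-in
            # for the hierarchical Bayesian latent-variable model
            mean = W.mean(axis=0)
            _, _, Vt = np.linalg.svd(W - mean, full_matrices=False)
            A = Vt[:n_latent]                    # (n_latent, n_basis)
            Z = (W - mean) @ A.T                 # latent code per demonstration
            return mean, A, Z

        # Toy usage: generalize by moving in the low-dimensional latent space
        demos = np.cumsum(np.random.randn(20, 100), axis=1) * 0.01
        W, Phi = fit_promp_weights(demos)
        mean, A, Z = latent_promp(W)
        traj = Phi @ (mean + Z.mean(axis=0) @ A)   # decode a latent point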

    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare different approaches towards learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (during the first six months of the project), and (2) to analyze their suitability to be used as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. with a clear goal where the movement stops) and for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with illustrative simulation results. Key characteristics, such as the type of dynamical behavior, the learning algorithm, generalization properties, and stability analysis, are then discussed for each approach. Finally, we compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
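
    As a concrete illustration of the kind of dynamical-system module compared in the deliverable, the sketch below implements a 1-D discrete Dynamic Movement Primitive, one of the best-known members of this family: a point attractor whose transient can be shaped by a learned, phase-gated forcing term. The constants and the zero forcing term are illustrative assumptions, not any specific AMARSi module.

        import numpy as np

        def dmp_rollout(y0, goal, forcing, alpha=25.0, beta=6.25,
                        alpha_x=3.0, dt=0.001, T=1.0):
            # 1-D discrete DMP: a critically damped spring-damper pulls the
            # state toward `goal`; the forcing term is gated by the canonical
            # phase x, which decays to 0, so convergence to the goal is kept.
            y, dy, x = y0, 0.0, 1.0               # position, velocity, phase
            traj = []
            for _ in range(int(T / dt)):
                f = forcing(x) * x * (goal - y0)  # scaled, phase-gated forcing
                ddy = alpha * (beta * (goal - y) - dy) + f
                dy += ddy * dt
                y += dy * dt
                x += -alpha_x * x * dt            # canonical system: x' = -a_x * x
                traj.append(y)
            return np.array(traj)

        # With zero forcing the module is a pure point attractor; a forcing
        # term learned from demonstrations shapes the path toward the goal.
        traj = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda x: 0.0)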

    Advances in humanoid control and perception

    One day there will be humanoid robots among us doing our boring, time-consuming, or dangerous tasks. They might cook a delicious meal for us or do the groceries. For this to become reality, many advances need to be made to the artificial intelligence of humanoid robots. The ever-increasing available computational processing power opens new doors for such advances. In this thesis we develop novel algorithms for humanoid control and vision that harness this power. We apply these methods on an iCub humanoid upper body with 41 degrees of freedom. For control, we develop Natural Gradient Inverse Kinematics (NGIK), a sampling-based optimiser that applies natural evolution strategies to perform inverse kinematics. The resulting algorithm makes very few assumptions and gives much more freedom in definable constraints than its Jacobian-based counterparts. A special graph-building procedure is introduced to build Task-Relevant Roadmaps (TRMs) by iteratively applying NGIK and storing the results. TRMs form searchable graphs of kinematic configurations on which a wide range of task-relevant humanoid movements can be planned. By coordinating several instances of NGIK, a fast parallelised version of the TRM-building algorithm is developed. In contrast to the offline TRM algorithms, we also develop Natural Gradient Control (NGC), which directly uses the optimisation pass in NGIK as an online control signal. For vision, we develop dynamic vision algorithms that form cyclic information flows affecting their own processing. Deep Attention Selective Networks (dasNet) implement feedback in convolutional neural networks through a gating mechanism steered by a policy. Through this feedback, dasNet can focus on different features in the image in light of previously gathered information and improve classification, with state-of-the-art results at the time of publication. We then develop PyraMiD-LSTM, which processes 3D volumetric data by employing a novel convolutional Long Short-Term Memory network (C-LSTM) to compute pyramidal contexts for every voxel and combine them to perform segmentation, achieving state-of-the-art performance on a segmentation benchmark. The work on control and vision is integrated into an application on the iCub robot. A Fast-Weight PyraMiD-LSTM is developed that dynamically generates weights for a C-LSTM layer given the actions of the robot. An explorative policy using NGC generates a stream of data, which the Fast-Weight PyraMiD-LSTM has to predict. The resulting integrated system learns to model the effects of head and hand movements on future visual input. To our knowledge, this is the first effective visual prediction system on an iCub.
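
    The sketch below illustrates, on a toy problem, the sampling-based optimisation idea behind NGIK: a rank-based evolution strategy (a simplified relative of the natural evolution strategies named above) searches the joint angles of a planar 3-link arm to reach a target. The arm model, the isotropic search distribution, and all constants are assumptions for this sketch, not the thesis code.

        import numpy as np

        def fk(theta, link=1.0):
            # Forward kinematics of a planar 3-link arm: end-effector (x, y)
            angles = np.cumsum(theta)
            return link * np.array([np.cos(angles).sum(), np.sin(angles).sum()])

        def es_ik(target, n_joints=3, pop=50, sigma=0.1, lr=0.5, iters=500):
            # Sample joint configurations around the current mean, rank them
            # by task-space error, and move the mean along the estimated
            # gradient; rank-based utilities make the update invariant to
            # the scaling of the cost.
            rng = np.random.default_rng(0)
            mean = np.zeros(n_joints)
            for _ in range(iters):
                eps = rng.standard_normal((pop, n_joints))
                costs = np.array([np.linalg.norm(fk(mean + sigma * e) - target)
                                  for e in eps])
                ranks = costs.argsort().argsort()      # 0 = best sample
                utils = (pop - 1 - ranks) / (pop - 1) - 0.5
                mean += lr * (utils @ eps) / (pop * sigma)  # ascend utility
            return mean

        theta = es_ik(target=np.array([1.5, 1.0]))
        print(fk(theta))   # close to the target whenever it is reachable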

    Multi-Character Motion Retargeting for Large Scale Changes

    Using Dimensionality Reduction to Exploit Constraints in Reinforcement Learning

    Reinforcement learning in the high-dimensional, continuous spaces typical in robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning, by extracting the inherent problem structure to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies task constraints. As shown in our experiments, this reduces the exploration needed and greatly accelerates learning. We demonstrate our approach by learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
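
    A minimal sketch of this idea, under stated assumptions: PCA and random search stand in for the probabilistic, nonlinear dimensionality reduction and the reinforcement learner used in the paper, and the demonstration data are synthetic placeholders. Candidates are explored in the latent space and decoded back to the full joint space, so they respect the constraints captured by the demonstrations.

        import numpy as np

        def fit_latent_space(postures, n_latent=2):
            # Linear latent model: posture ~ mean + z @ components
            mean = postures.mean(axis=0)
            _, _, Vt = np.linalg.svd(postures - mean, full_matrices=False)
            return mean, Vt[:n_latent]

        def decode(z, mean, components):
            # Map a latent point back to a full joint configuration
            return mean + z @ components

        # Synthetic "demonstrations": 19-DOF postures on a 2-D subspace
        rng = np.random.default_rng(0)
        demos = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 19))
        mean, comps = fit_latent_space(demos)

        def cost(q):
            # Toy task cost in the full joint space
            return np.linalg.norm(q - demos[0])

        # The learner explores 2 dimensions instead of 19; every decoded
        # candidate automatically satisfies the latent constraints.
        best_z = min((rng.standard_normal(2) for _ in range(1000)),
                     key=lambda z: cost(decode(z, mean, comps)))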