    Nonprehensile Dynamic Manipulation: A Survey

    Nonprehensile dynamic manipulation can reasonably be considered the most complex manipulation task, and it is arguably still far from being fully solved and applied in robotics. This survey collects the results achieved so far by the research community on planning and control in the nonprehensile dynamic manipulation domain. A discussion of current open issues is provided as well.

    Dynamic Bat-Control of a Redundant Ball Playing Robot

    This thesis presents a control algorithm for a ball-batting task performed by an entertainment robot. The robot, named Doggy after its dog-like costume, has three joints and one redundant degree of freedom; its design, mechanics, and electronics were developed by us. DC motors drive the tooth-belt-driven joints, which introduces elasticities between motor and link. Both the redundancy and the elasticity make control demanding and have to be taken into account by our controller. In this thesis we describe the structure of the ball-playing robot and how this structure can be captured in a model. We distinguish two models: one that includes a flexible bearing and one that does not. Both models are calibrated, i.e., their parameters are determined, using the Sparse Least Squares on Manifolds (SLOM) toolkit, and both calibrated models are compared against measurements of the real system. The model with the flexible bearing is used to implement a state estimator, based on a Kalman filter, on a microcontroller, which ensures real-time estimation of the robot states. The estimated states are also compared with the measurements and assessed; they represent the measurements well. At the core of this work we develop a Task Level Optimal Controller (TLOC), a model-predictive optimal controller based on the principles of a Linear Quadratic Regulator (LQR). We aim to play a ball back to an opponent precisely, and we show how the task of hitting a ball at a desired time with a desired velocity at a desired position can be embedded into the LQR principle using cost functions for the task description. In simulations, we demonstrate the functionality of the control concept, which consists of a linear part (on a microcontroller) and a nonlinear part (PC software); the linear part uses feedback gains calculated by the nonlinear part. The ball-batting controller with precalculated feedback gains is evaluated on the robot and yields successful batting motions. The entertainment aspect was tested on the Open Campus Day at the University of Bremen and is summarized here briefly, as is a jointly developed audience interaction based on the recognition of distinctive sounds. In this thesis we answer the question of whether it is possible to define a rebound task for our robot within a controller and show the necessary steps to do so.
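    The TLOC idea of embedding a "hit the ball at a desired time with a desired position and velocity" objective into an LQR can be illustrated with a minimal sketch. The snippet below assumes a single joint modeled as a double integrator and purely illustrative cost weights; the controller in the thesis of course uses the full elastic, redundant robot model.

```python
# Minimal sketch: a batting-style task encoded as a finite-horizon LQR with a
# heavy terminal cost at the hit time (illustrative simplification of the TLOC).
import numpy as np

dt, N = 0.01, 100                      # 1 s horizon, hit at the final step
A = np.array([[1.0, dt], [0.0, 1.0]])  # double-integrator joint model (assumption)
B = np.array([[0.0], [dt]])

x_des = np.array([0.5, -2.0])          # desired position and velocity at impact
d = (A - np.eye(2)) @ x_des            # affine drift of the tracking error

# Augment the error state with a constant 1 to absorb the drift term.
Aa = np.block([[A, d.reshape(2, 1)], [np.zeros((1, 2)), np.ones((1, 1))]])
Ba = np.vstack([B, [[0.0]]])

Q = np.zeros((3, 3))                   # no running state cost: only the hit matters
R = np.array([[1e-3]])                 # small effort penalty
Qf = np.diag([1e4, 1e4, 0.0])          # heavy terminal cost on position/velocity error

# Backward Riccati recursion -> time-varying feedback gains K[k]
P = Qf.copy()
K = [None] * N
for k in reversed(range(N)):
    K[k] = np.linalg.solve(R + Ba.T @ P @ Ba, Ba.T @ P @ Aa)
    P = Q + Aa.T @ P @ (Aa - Ba @ K[k])

# Forward simulation from rest: the joint reaches x_des almost exactly at step N.
x = np.array([0.0, 0.0])
for k in range(N):
    e = np.append(x - x_des, 1.0)      # augmented tracking error
    u = (-K[k] @ e).item()
    x = A @ x + B.ravel() * u
print("state at hit time:", x, "desired:", x_des)
```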

    Rob’s Robot: Current and Future Challenges for Humanoid Robots


    Nonprehensile Manipulation of Deformable Objects: Achievements and Perspectives from the RoDyMan Project

    The goal of this work is to disseminate the results achieved so far within the RoDyMan project related to planning and control strategies for robotic nonprehensile manipulation. The project aims at advancing the state of the art in nonprehensile dynamic manipulation of rigid and deformable objects to further enhance the possibility of employing robots in anthropic environments. The final demonstrator of the RoDyMan project will be an autonomous pizza maker. This article is a milestone that highlights the lessons learned so far and paves the way towards future research directions and critical discussions.

    Motion planning and control methods for nonprehensile manipulation and multi-contact locomotion tasks

    Many existing works in the robotics literature deal with the problem of nonprehensile dynamic manipulation, yet a unified control framework does not exist so far. One of the ambitious goals of this thesis is to help identify planning and control frameworks that solve classes of nonprehensile dynamic manipulation tasks, dealing with the nonlinearity of their dynamic models and, consequently, with the inherent design complexity. In addition, by exploring a number of connections between dynamic nonprehensile manipulation and legged locomotion, the thesis presents novel methods for generating walking motions in multi-contact situations.

    Visual Geometric Skill Inference by Watching Human Demonstration

    We study the problem of learning manipulation skills from human demonstration video by inferring the association relationships between geometric features. The motivation for this work stems from the observation that humans perform eye-hand coordination tasks by using geometric primitives to define a task, while a geometric control error drives the task through execution. We propose a graph-based kernel regression method to directly infer the underlying association constraints from human demonstration video using Incremental Maximum Entropy Inverse Reinforcement Learning (InMaxEnt IRL). The learned skill inference provides a human-readable task definition and outputs control errors that can be directly plugged into traditional controllers. Our method removes the need for the tedious feature selection and robust feature trackers required in traditional approaches (e.g., feature-based visual servoing). Experiments show that our method infers correct geometric associations even with only one human demonstration video and generalizes well under variance. Comment: Accepted at ICRA 2020.
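    The core idea, i.e. a task defined by an association between geometric primitives whose residual serves as the control error, can be sketched as follows. The point-to-line constraint, the pixel coordinates, and the function names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a geometric association constraint in the image plane: a
# point-to-line error that, once the association has been inferred from a
# demonstration, could be fed to a traditional (visual-servoing-style) controller.
import numpy as np

def line_through(p1, p2):
    """Homogeneous image line through two image points (u, v)."""
    l = np.cross(np.append(p1, 1.0), np.append(p2, 1.0))
    return l / np.linalg.norm(l[:2])          # normalize so the error is in pixels

def point_to_line_error(point, line):
    """Signed pixel distance of an image point from a homogeneous line."""
    return float(np.append(point, 1.0) @ line)

# Hypothetical example: the demonstration implied "keep the tool tip on the table edge".
edge = line_through(np.array([100.0, 420.0]), np.array([540.0, 410.0]))
tool_tip = np.array([320.0, 395.0])

e = point_to_line_error(tool_tip, edge)       # control error driving the execution
print(f"point-to-line error: {e:.1f} px")
```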

    A Shared-Control Teleoperation Architecture for Nonprehensile Object Transportation

    This article proposes a shared-control teleoperation architecture for robot manipulators transporting an object on a tray. Unlike many existing studies about remotely operated robots with firm grasping capabilities, we consider the case in which, in principle, the object can break its contact with the robot end-effector. The proposed shared-control approach automatically regulates the remote robot motion commanded by the user and the end-effector orientation to prevent the object from sliding over the tray. Furthermore, the human operator is provided with haptic cues informing about the discrepancy between the commanded and executed robot motion, which assist the operator throughout the task execution. We carried out trajectory tracking experiments employing an autonomous 7-degree-of-freedom (DoF) manipulator and compared the results obtained using the proposed approach with two different control schemes (i.e., constant tray orientation and no motion adjustment). We also carried out a human-subjects study involving 18 participants in which a 3-DoF haptic device was used to teleoperate the robot's linear motion and display haptic cues to the operator. In all experiments, the results clearly show that our control approach outperforms the other solutions in terms of sliding prevention, robustness, command tracking, and user preference.
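    The physical constraint behind the sliding-prevention behaviour can be illustrated with a short sketch: the object stays put only while the tangential component of the gravito-inertial acceleration remains inside the friction cone, and tilting the tray so that its normal aligns with that acceleration removes the tangential component. The friction coefficient, the commanded acceleration, and the function names below are illustrative assumptions, not the article's controller.

```python
# Minimal sketch of the no-sliding condition for an object carried on a tray.
import numpy as np

g = np.array([0.0, 0.0, -9.81])
mu = 0.3                                   # assumed object/tray friction coefficient

def slides(a_cmd, tray_normal):
    """True if a commanded tray acceleration would make the object slip."""
    n = tray_normal / np.linalg.norm(tray_normal)
    a_total = a_cmd - g                    # specific force felt by the object
    a_n = float(a_total @ n)               # normal component (must stay positive)
    a_t = np.linalg.norm(a_total - a_n * n)
    return a_n <= 0.0 or a_t > mu * a_n    # outside the friction cone -> sliding

def safe_tray_normal(a_cmd):
    """Tray normal that cancels the tangential component entirely."""
    a_total = a_cmd - g
    return a_total / np.linalg.norm(a_total)

a_cmd = np.array([4.0, 0.0, 0.0])          # operator commands a brisk lateral motion
print("slides with flat tray:", slides(a_cmd, np.array([0.0, 0.0, 1.0])))
print("slides with adjusted tray:", slides(a_cmd, safe_tray_normal(a_cmd)))
```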

    Deep learning based approaches for imitation learning.

    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as the rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions, which enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first challenge is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour; this alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over situations unseen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. First, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Second, we investigate combining learning from demonstrations and reinforcement learning, proposing a deep reward-shaping method that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks learned from raw visual data, as well as in a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning or reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
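    The first contribution, mapping raw visual demonstrations directly to atomic actions with a CNN, amounts to behaviour cloning and can be sketched in a few lines. The architecture, input size, and dummy batch below are illustrative assumptions (a standard small convolutional policy in PyTorch), not the thesis' exact networks or data.

```python
# Minimal behaviour-cloning sketch: a CNN maps raw frames to atomic-action logits
# and is trained with supervised learning on demonstrated (frame, action) pairs.
import torch
import torch.nn as nn

class PolicyCNN(nn.Module):
    def __init__(self, n_actions: int = 4):
        super().__init__()
        self.features = nn.Sequential(           # feature extraction from 84x84 RGB frames
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(               # map features to action logits
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, obs):
        return self.head(self.features(obs))

policy = PolicyCNN()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One supervised step on a dummy batch standing in for demonstration data.
frames = torch.rand(16, 3, 84, 84)               # stand-in for demonstration images
actions = torch.randint(0, 4, (16,))             # stand-in for demonstrated atomic actions
loss = loss_fn(policy(frames), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```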