15 research outputs found

    Grasp Stability Prediction for a Dexterous Robotic Hand Combining Depth Vision and Haptic Bayesian Exploration.

    Grasp stability prediction of unknown objects is crucial to enable autonomous robotic manipulation in an unstructured environment. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object-modelling inaccuracies. This paper presents an approach to predicting safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach does not assume any prior knowledge about the objects. First, an object pose estimate is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (standard and unscented Bayesian Optimisation) against random exploration (uniform grid search). Our experimental results demonstrate that these probabilistic methods can provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation can find safer grasps by taking into account the uncertainty in robot sensing and grasp execution.
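    A minimal sketch of the exploration loop this abstract describes: a Gaussian-process surrogate is fitted to the grasps tried so far, and an expected-improvement acquisition picks the next grasp to execute. The `grasp_metric` callable, the bounds, and all hyperparameters are illustrative assumptions, not the paper's implementation; the unscented variant would additionally propagate input (pose) uncertainty through the acquisition, which is omitted here.

```python
# Minimal Bayesian-optimisation loop over grasp parameters (illustrative sketch).
# grasp_metric(x) is assumed to execute a grasp at parameters x and return a
# scalar quality score; it and the bounds are placeholders, not the paper's API.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(X, gp, y_best, xi=0.01):
    """Standard EI acquisition: expected amount by which X beats y_best."""
    mu, sigma = gp.predict(X, return_std=True)
    z = (mu - y_best - xi) / np.maximum(sigma, 1e-9)
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt_grasp(grasp_metric, bounds, n_init=5, n_iter=20,
                    rng=np.random.default_rng(0)):
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, dim))       # random exploratory grasps
    y = np.array([grasp_metric(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                  # refit surrogate to data
        cand = rng.uniform(lo, hi, size=(500, dim))   # candidate grasp poses
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, grasp_metric(x_next))        # execute the chosen grasp
    return X[np.argmax(y)], y.max()                   # best grasp found
```

    The uniform-grid baseline from the paper corresponds to replacing the acquisition step with a fixed candidate sweep, which is what the probabilistic methods are shown to beat in sample efficiency.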

    Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects

    This paper addresses the problem of simultaneously exploring an unknown object to model its shape, using tactile sensors on robotic fingers, while also improving finger placement to optimise grasp stability. In many situations, a robot will have only a partial camera view of the near side of an observed object, while the far side remains occluded. We show how an initial grasp attempt, based on an initial guess of the overall object shape, yields tactile glances of the far side of the object which enable the shape estimate, and consequently the successive grasps, to be improved. We propose a grasp exploration approach using a probabilistic representation of shape, based on Gaussian Process Implicit Surfaces. This representation enables initial partial vision data to be augmented with additional data from successive tactile glances. This is combined with a probabilistic estimate of grasp quality to refine grasp configurations. When choosing the next set of finger placements, a bi-objective optimisation method is used to mutually maximise grasp quality and improve shape representation during successive grasp attempts. Experimental results show that the proposed approach yields stable grasp configurations more efficiently than a baseline method, while also yielding an improved shape estimate of the grasped object.
    Comment: IEEE Robotics and Automation Letters. Preprint Version. Accepted February, 202
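    As a rough illustration of the shape representation, a Gaussian Process Implicit Surface can be fitted to labelled points (0 on the observed surface, -1 at an interior anchor, +1 outside); its predictive variance then flags where a tactile glance would be most informative. The sketch below scalarises the bi-objective selection as a weighted sum for brevity (the paper optimises both objectives jointly), and `grasp_quality` is a placeholder, not the paper's metric.

```python
# Gaussian Process Implicit Surface (GPIS) sketch: f(x) ~ 0 on the surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gpis(surface_pts, interior_pt, exterior_pts):
    """Fit a GP to implicit-surface labels: 0 on surface, -1 inside, +1 outside.
    surface_pts come from vision and accumulated tactile glances."""
    X = np.vstack([surface_pts, interior_pt[None, :], exterior_pts])
    y = np.concatenate([np.zeros(len(surface_pts)), [-1.0],
                        np.ones(len(exterior_pts))])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=1e-4)
    return gp.fit(X, y)

def next_finger_placements(gp, candidates, grasp_quality, w=0.5):
    """Weighted-sum stand-in for the bi-objective choice: trade off a placeholder
    grasp-quality score against shape uncertainty (GP predictive std)."""
    _, sigma = gp.predict(candidates, return_std=True)   # high sigma = unknown shape
    scores = w * grasp_quality(candidates) + (1 - w) * sigma
    return candidates[np.argmax(scores)]
```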

    Bayesian Optimization for robust robotic grasping

    Among the most complex tasks performed by humans is the manipulation of objects. In robotics, automating such tasks has applications in a variety of environments, from industrial processes to assistance for people with physical or motor disabilities. Bio-inspired robotic hands are helping increasingly robust and dexterous grasping strategies to emerge. The difficulty, however, lies in adapting these strategies to the variety of tasks and objects, which are often unknown and involve the computational overhead of identifying them and reconfiguring the grasp. The brute-force solution is to learn new grasps by trial and error; this, however, is inefficient and ineffective, as it is based on pure randomness. In contrast, Bayesian optimization turns this process into active learning, where each attempt adds information to the approximation of an optimal grasp, much as a child learns.

    The present work tests Bayesian optimization in this context, provides techniques to enhance its performance, and experiments not only in simulation but also on real robots. It also studies different grasp metrics that allow grasps to be evaluated during the optimization process, and how these metrics behave when computed on a real system. To this end, we implemented a realistic simulation environment using PyBullet, which emulates the real experimental environment. This work provides experimental results using the Light-Weight robotic arm designed at the German Aerospace Center (DLR) and two tridactyl robotic hands, the CLASH (DLR) and the ReFlex TakkTile (RightHand Robotics), demonstrating the usefulness of the method for grasping unknown objects even in the presence of the noise and uncertainty inherent in a real-world environment. Consequently, this work contributes practical knowledge to the studied field and serves as a proof-of-concept for future grasp planning and robotic manipulation technology.
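    Since the thesis names PyBullet as its simulation environment, a crude grasp-evaluation loop of the kind such a setup needs might look like the sketch below: close the hand, lift, and check whether contact with the object persists. The URDF paths and the caller-supplied `close_fingers`/`lift` controllers are placeholders, and the binary contact check is a stand-in for the richer grasp metrics the thesis studies.

```python
# Illustrative PyBullet grasp check: lift the object, see whether contact persists.
import pybullet as p
import pybullet_data

def evaluate_grasp(robot_urdf, object_urdf, close_fingers, lift):
    p.connect(p.DIRECT)                                  # headless simulation
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    robot = p.loadURDF(robot_urdf, useFixedBase=True)
    obj = p.loadURDF(object_urdf, basePosition=[0.5, 0.0, 0.05])
    close_fingers(robot)                                 # caller-supplied controller
    for _ in range(240):                                 # settle the grasp (~1 s)
        p.stepSimulation()
    lift(robot)                                          # caller-supplied lift motion
    for _ in range(480):
        p.stepSimulation()
    held = len(p.getContactPoints(bodyA=robot, bodyB=obj)) > 0
    p.disconnect()
    return held                                          # crude binary grasp outcome
```

    A scalar metric (e.g. contact-wrench quality rather than this binary outcome) is what a Bayesian-optimisation loop would consume in place of `held`.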

    Experience-driven optimal motion synthesis in complex and shared environments

    Optimal loco-manipulation planning and control for high-dimensional systems based on general, non-linear optimisation allows for the specification of versatile motion subject to complex constraints. However, complex, non-linear system and environment dynamics, switching contacts, and collision avoidance in cluttered environments introduce non-convexity and discontinuity into the optimisation space. This renders finding optimal solutions in complex and changing environments an open and challenging problem in robotics. Global optimisation methods can take a prohibitively long time to converge; this makes them unsuitable for live deployment and online re-planning of motion policies in response to changes in the task or environment. Local optimisation techniques, in contrast, converge fast within the basin of attraction of a minimum, but may not converge at all without a good initial guess, as they can easily get stuck in local minima. Local methods are therefore a suitable choice provided we can supply a good initial guess. If a similarity between problems can be found and exploited, a memory of optimal solutions can be computed and compressed efficiently in an offline process. During runtime, we can query this memory to bootstrap motion synthesis by providing a good initial seed to the local optimisation solver.

    In order to realise such a system, we need to address several connected problems and questions. First, the formulation of the optimisation problem (and its parametrisation to allow solutions to transfer to new scenarios) and, relatedly, the type and granularity of user input, along with a strategy for recovery and feedback in case of unexpected changes or failure. Second, a sampling strategy during database/memory generation that explores the parameter space efficiently without resorting to exhaustive measures, i.e., that balances storage size/memory against the online runtime needed to adapt/repair the initial guess. Third, the question of how to represent the problem and environment so as to parametrise, compute, store, retrieve, and exploit the memory efficiently during pre-computation and runtime.

    One strategy to make the problem computationally tractable is to decompose planning into a series of sequential sub-problems, e.g., contact-before-motion approaches which sequentially perform goal-state planning, contact planning, motion planning, and encoding. Here, subsequent stages operate within the null-space of the constraints of the prior problem, such as the contact mode or sequence. This doctoral thesis follows this line of work. It investigates general optimisation-based formulations for motion synthesis, along with a strategy for exploration, encoding, and exploitation of a versatile memory-of-motion for providing an initial guess to optimisation solvers. In particular, we focus on manipulation in complex environments with high-dimensional robot systems such as humanoids and mobile manipulators.

    The first part of this thesis focuses on reliably generating collision-free motion. We present a general, collision-free inverse kinematics method using a combination of gradient-based local optimisation with random/evolution-strategy restarts to achieve high success rates and avoid local minima. We use formulations for discrete collision avoidance and introduce a novel, computationally fast continuous collision-avoidance objective based on conservative advancement and harmonic potential fields. Using this, we can synthesise continuous-time collision-free motion plans in the presence of moving obstacles. It further enables trajectories to be discretised with fewer waypoints, which in turn considerably reduces the complexity of the optimisation problem and, thus, the time to solve it.

    The second part focuses on problem representations and exploration. We first introduce an efficient solution encoding for trajectory-library-based approaches. This representation, paired with an accompanying exploration strategy for offline pre-computation, permits the application of inexpensive distance metrics during runtime; a sketch of this warm-starting scheme is given below. We demonstrate how our method efficiently re-uses trajectory samples, increases planning success rates, and reduces planning time while being highly memory-efficient. We subsequently present a method to explore the topological features of the solution space using tools from computational homology. This enables us to cluster solutions according to their inherent structure, which increases the success of warm-starting for problems with discontinuities and multi-modality.

    The third part focuses on real-world deployment in laboratory and field experiments, as well as on incorporating user input. We present a framework for robust shared autonomy with a focus on continuous scene monitoring for assured safety. This framework further supports interactive adjustment of autonomy levels, from full teleoperation to automatic execution of stored behaviour sequences. Finally, we present sensing and control for the integration and embodiment of the presented methodology in high-dimensional real-world platforms used in laboratory experiments and real-world deployment. We validate the presented methods in hardware experiments on a variety of robot platforms, demonstrating generalisation to other robots and environments.
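    The memory-of-motion idea can be reduced to a small sketch: an offline-built nearest-neighbour index over task parameters returns a stored solution as the initial seed for a local solver, with random restarts as a fallback when no neighbour is close enough. The cost function, task parametrisation, and distance threshold below are placeholder assumptions, not the thesis's formulation.

```python
# Warm-started local optimisation: query a memory of prior solutions for an
# initial seed; fall back to random restarts if no stored problem is similar.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

class MotionMemory:
    def __init__(self, task_params, solutions):
        self.tree = cKDTree(task_params)       # index built in offline pre-computation
        self.solutions = solutions             # optimal solutions, one per task sample

    def warm_start(self, task, max_dist=0.5):
        dist, idx = self.tree.query(task)      # inexpensive runtime distance query
        return self.solutions[idx] if dist < max_dist else None

def solve(cost, task, memory, dim, n_restarts=10, rng=np.random.default_rng(0)):
    """cost(x, task) -> scalar; a placeholder for the motion-synthesis objective."""
    seed = memory.warm_start(task)
    seeds = ([seed] if seed is not None
             else list(rng.standard_normal((n_restarts, dim))))   # random restarts
    best = min((minimize(cost, s, args=(task,)) for s in seeds),
               key=lambda r: r.fun)
    return best.x, best.fun
```

    Clustering the stored solutions by topological class, as the second part of the thesis proposes, would refine the `warm_start` query so that the seed is drawn from the same homotopy class as the query task.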

    Of Priors and Particles: Structured and Distributed Approaches to Robot Perception and Control

    Applications of robotic systems have expanded significantly in scope, moving beyond the caged predictability of industrial automation and towards more open, unstructured environments. These agents must learn to reliably perceive their surroundings, efficiently integrate new information, and quickly adapt to dynamic perturbations. To accomplish this, we require solutions which can effectively incorporate prior knowledge while maintaining the generality of learned representations. These systems must also contend with uncertainty, both in their perception of the world and in predicting possible future outcomes. Efficient methods for probabilistic inference are then key to realizing robust, adaptive behavior. This thesis first examines data-driven approaches for learning and combining perceptual models for the visual and tactile sensor modalities common in robotics. Modern variational inference methods are then examined in the context of online optimization and stochastic optimal control. Specifically, this thesis contributes (1) data-driven visual and tactile perceptual models leveraging kinematic and dynamic priors, (2) a framework for joint inference with visuo-tactile sensing, (3) a family of particle-based, variational model predictive control and planning algorithms, and (4) a distributed inference scheme for online model adaptation.
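    Contribution (3) belongs to the family of sampling-based MPC; a generic MPPI-style step conveys the particle idea: sample perturbed control sequences, weight them by exponentiated negative cost, and shift the nominal plan toward well-performing particles. The `dynamics` and `cost` callables and all hyperparameters here are assumptions for illustration, not the thesis's specific variational algorithm.

```python
# MPPI-style particle control sketch: roll out sampled control sequences,
# weight by exponentiated negative cost, update the nominal plan.
import numpy as np

def particle_mpc_step(x0, u_nominal, dynamics, cost, n_particles=256,
                      noise_std=0.3, temperature=1.0,
                      rng=np.random.default_rng(0)):
    H, u_dim = u_nominal.shape                           # horizon x control dim
    eps = rng.normal(0.0, noise_std, size=(n_particles, H, u_dim))
    costs = np.zeros(n_particles)
    for k in range(n_particles):                         # roll out each particle
        x = x0
        for t in range(H):
            x = dynamics(x, u_nominal[t] + eps[k, t])
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / temperature)     # soft-min particle weights
    w /= w.sum()
    return u_nominal + np.einsum("k,khu->hu", w, eps)    # weighted plan update
```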

    Multi-Robot Systems: Challenges, Trends and Applications

    This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications”, published in Applied Sciences. The Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present trends to address these issues, and report various relevant applications. Among the topics addressed are robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.

    Robust Tactile Slip Detection, Using the uSkin Sensor Applied to Autonomous Robotic Grasping

    There is a growing interest in leveraging tactile sensing and data-driven methods to generate more robust robot grasps for effective autonomous tasks (e.g. object pick-and-place). In this context, slip detection is a fundamental skill, since it allows the robot to detect when a grasped object moves within the gripper and to apply corrective actions that hold the object static. Robotic slip detection requires two components: a sensor that captures signals related to the physical interaction between a gripper and an object, and a model that identifies whether such data corresponds to a slip event. In this thesis, a novel magnetic-based tactile sensor, the uSkin, is used; its measurements of distributed normal and shear forces are leveraged to develop slip-detection methods for real-time robot-object manipulation.

    In the context of autonomous robotic grasping, current slip-detection methods lack the generalisation capabilities to cope with a generic set of tactile interactions. Effective slip classifiers have been proposed in the literature, combining different mechanical sensing principles (e.g. vibration, force, contact-region sensing) and data-driven learning strategies (e.g. machine learning and deep learning methods). However, due to the constraints of collecting slip-related tactile data, current solutions struggle to cope with the large variability in gripper-object interactions in typical autonomous grasping systems (e.g. different grasp poses, slip intensities). Previous work simplifies the data collection process while also critically limiting the variety of interactions for which data is collected (e.g. considering only up to 6 grasp poses).

    This thesis investigates the applicability to autonomous grasping systems of several data-driven slip-detection models trained with newly proposed, efficient data collection processes that are also representative of the variability expected in autonomous robotic grasping. First, a vision-based autonomous data collection protocol is proposed, and classical machine learning models are trained with the collected data: first with data from one or more objects (achieving up to 61.5% F1-score generalisation performance), and later within a multi-stage object detection and slip detection pipeline that detects slips from autonomous grasps (up to 52% F1-score generalisation performance). Then, learning from the key challenges observed with these approaches, a controlled, object- and sensor-independent tactile data collection protocol is proposed to collect tactile data representative of realistic variability. Experimentally, this protocol is shown to be faster, more efficient, and more reproducible than those proposed in the literature. Models trained with such data also generalise better to unseen conditions, enabling robust robotic pick-and-place in real-world settings.
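    A minimal sketch of the kind of windowed, data-driven slip classifier the abstract describes: simple statistics over a short window of per-taxel normal/shear force readings, fed to an off-the-shelf classifier and scored with F1, the metric the thesis reports. The feature set and the RandomForest choice are illustrative assumptions, not the thesis's exact pipeline.

```python
# Illustrative slip classifier: statistics over a window of taxel force
# readings (normal + shear axes), fed to an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def window_features(window):
    """window: (T, n_taxels, 3) forces -> per-axis mean, std, peak derivative."""
    diff = np.diff(window, axis=0)                       # slips show up as fast changes
    return np.concatenate([window.mean(axis=(0, 1)),
                           window.std(axis=(0, 1)),
                           np.abs(diff).max(axis=(0, 1))])

def train_slip_detector(windows, labels):
    """windows: list of (T, n_taxels, 3) arrays; labels: 1 = slip, 0 = stable."""
    X = np.stack([window_features(w) for w in windows])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                              stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("F1:", f1_score(y_te, clf.predict(X_te)))      # thesis's reported metric
    return clf
```

    Generalisation of the kind the thesis measures would be tested by holding out entire objects or grasp poses, not random windows, when splitting the data.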