    Behavior-specific proprioception models for robotic force estimation: a machine learning approach

    Robots that support humans in physically demanding tasks require accurate force sensing capabilities. A common way to achieve this is to monitor the interaction with the environment directly with dedicated force sensors. Major drawbacks of such special-purpose sensors are the increased cost and the reduced payload of the robot platform. Instead, this thesis investigates how the functionality of such sensors can be approximated using force estimation approaches. Most of today's robots are equipped with rich proprioceptive sensing capabilities; even a robotic arm such as the UR5 provides access to more than a hundred sensor readings. Following this trend, it is becoming feasible to utilize a wide variety of sensors for force estimation purposes. Human proprioception allows estimating forces, such as the weight of an object, from prior experience with sensory-motor patterns. Applying a similar approach to robots enables them to learn from previous demonstrations without the need for dedicated force sensors. This thesis introduces Behavior-Specific Proprioception Models (BSPMs), a novel concept for enhancing robotic behavior with estimates of the expected proprioceptive feedback. A main methodological contribution is the operationalization of the BSPM approach using data-driven machine learning techniques. During a training phase, the behavior is continuously executed while proprioceptive sensor readings are recorded. The training data acquired from these demonstrations represents ground truth about behavior-specific sensory-motor experiences, i.e., the influence of performed actions and environmental conditions on the proprioceptive feedback. This data acquisition procedure does not require expert knowledge about the particular robot platform, e.g., its kinematic chain or mass distribution, which is a major advantage over analytical approaches. The training data is then used to learn BSPMs, e.g., using lazy learning techniques or artificial neural networks. At runtime, the BSPMs provide estimates of the proprioceptive feedback that can be compared to the actual sensations. The BSPM approach thus extends classical programming-by-demonstration methods, where only movement data is learned, and enables robots to accurately estimate forces during behavior execution.
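    The core loop described here (record proprioceptive data while executing a behavior, learn a model, compare predictions with actual readings at runtime) can be illustrated with a small sketch. The following is not the thesis implementation; it uses synthetic data, an assumed 6-joint arm, and scikit-learn's k-nearest-neighbour regressor as a stand-in for the lazy learning techniques mentioned above.

```python
# Minimal sketch of the BSPM idea (not the thesis code): learn a mapping from
# proprioceptive state to expected joint torques with a lazy (k-NN) learner,
# then treat the runtime residual as a cue for external forces.
# Data shapes and feature choices are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Placeholder training data recorded while executing one specific behavior:
# features = joint positions and velocities, targets = measured joint torques.
q = rng.uniform(-1.0, 1.0, size=(500, 6))        # joint positions (rad)
dq = rng.uniform(-0.5, 0.5, size=(500, 6))       # joint velocities (rad/s)
tau = 2.0 * q + 0.1 * dq + rng.normal(0, 0.01, size=(500, 6))  # synthetic torques

X_train = np.hstack([q, dq])
model = KNeighborsRegressor(n_neighbors=5)       # lazy learning stand-in
model.fit(X_train, tau)

# At runtime: predict the expected proprioceptive feedback for the current
# state and compare it with the actual sensor reading.
x_now = np.hstack([q[0], dq[0]])
tau_expected = model.predict(x_now.reshape(1, -1))[0]
tau_measured = tau[0] + 0.3                      # pretend an external force acts
residual = tau_measured - tau_expected           # deviation hints at external forces
print("estimated external torque contribution:", residual)
```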

    Robots that Anticipate Pain: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models

    To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from depth input, traditional video input, or both modalities combined.
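    As a rough illustration of the described idea, the sketch below combines a small visual encoder with a robot-state encoder and fuses them to predict the probability of an impending contact. Layer sizes, the 7-dimensional robot state, and the torque-spike labelling rule are assumptions for the example, not details taken from the thesis architecture.

```python
# Illustrative two-stream network: fuse a visual encoder with a robot-state
# encoder to predict whether a physical perturbation occurs in the near future.
import torch
import torch.nn as nn

class PerturbationPredictor(nn.Module):
    def __init__(self, state_dim=7):
        super().__init__()
        # Perceptual sub-network: encodes a depth or grayscale frame.
        self.vision = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Dynamics sub-network: encodes joint positions/velocities.
        self.state = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU())
        # Fusion head: probability of contact in the next time window.
        self.head = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, frame, robot_state):
        z = torch.cat([self.vision(frame), self.state(robot_state)], dim=1)
        return torch.sigmoid(self.head(z))

# Self-supervised labels (assumed rule for this sketch): a later spike in the
# measured joint torques marks "contact happened", so no manual annotation.
model = PerturbationPredictor()
p_contact = model(torch.zeros(1, 1, 64, 64), torch.zeros(1, 7))
print(p_contact.shape)  # torch.Size([1, 1])
```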

    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare different approaches to learning and encoding movements using dynamical systems that have been developed by the AMARSi partners during the first 6 months of the project, and (2) to analyze their suitability to be used as adaptive modules, i.e., as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e., with a clear goal where the movement stops) and modules for rhythmic movements (i.e., which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics, such as the type of dynamical behavior, learning algorithm, generalization properties, and stability analysis, are then discussed for each approach. We then compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
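    To make the two groups of modules concrete, the sketch below contrasts a minimal discrete module (a point attractor that stops at a goal) with a minimal rhythmic module (a Hopf-style limit cycle). These are generic textbook dynamical systems chosen for illustration with arbitrary parameter values; the eight approaches compared in the deliverable are more elaborate.

```python
# Discrete vs. rhythmic dynamical-system modules, in their simplest form.
import numpy as np

def discrete_module(y0, goal, alpha=25.0, dt=0.001, steps=2000):
    """Second-order point-attractor dynamics: converges to `goal` and stops."""
    y, dy = y0, 0.0
    traj = []
    for _ in range(steps):
        ddy = alpha * (alpha / 4.0 * (goal - y) - dy)   # critically damped spring
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

def rhythmic_module(radius=1.0, omega=2 * np.pi, dt=0.001, steps=2000):
    """Limit-cycle (Hopf-like) dynamics: settles onto a periodic orbit."""
    x, y = 0.1, 0.0
    traj = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (radius**2 - r2) * x - omega * y
        dy = (radius**2 - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
        traj.append(x)
    return np.array(traj)

print(discrete_module(0.0, 1.0)[-1])   # ~1.0: reaches the goal and stops
print(rhythmic_module()[-5:])          # keeps oscillating
```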

    From walking to running: robust and 3D humanoid gait generation via MPC

    Humanoid robots are platforms that can succeed in tasks conceived for humans. From locomotion in unstructured environments to driving cars or working in industrial plants, these robots have a potential that is yet to be unlocked in systematic everyday applications. Such a perspective, however, is opposed by the need to solve complex engineering problems on both the hardware and the software side. In this thesis, we focus on the software side of the problem, and in particular on locomotion control. The operation of a legged humanoid depends on its capability to realize reliable locomotion. In many settings, perturbations may undermine balance and make the robot fall. Moreover, complex and dynamic motions might be required by the context; for instance, the robot may need to start running or climb stairs to reach a certain location in the shortest time. We present gait generation schemes based on Model Predictive Control (MPC) that tackle both the problem of robustness and that of three-dimensional dynamic motions. The proposed control schemes adopt the typical paradigm of centroidal MPC for reference motion generation, enforcing dynamic balance through the Zero Moment Point condition, plus a whole-body controller that maps the generated trajectories to joint commands. Each of the described predictive controllers also features a so-called stability constraint, preventing the generation of Center of Mass trajectories that diverge with respect to the Zero Moment Point. Robustness is addressed by modeling the humanoid as a Linear Inverted Pendulum and devising two types of strategies. For persistent perturbations, a disturbance observer and a constraint-tightening technique (to ensure robust constraint satisfaction) are presented. For impulsive pushes, techniques for footstep and timing adaptation are introduced. The underlying approach is to interpret robustness as an MPC feasibility problem, aiming to ensure the existence of a solution to the constrained optimization problem solved at each iteration in spite of the perturbations. This perspective allows simple solutions to complex problems, favoring a reliable real-time implementation. For three-dimensional locomotion, on the other hand, the humanoid is modeled as a Variable Height Inverted Pendulum. Based on this model, a two-stage MPC is introduced, with particular emphasis on the implementation of the stability constraint. The overall result is a gait generation scheme that allows the robot to traverse relatively complex environments with non-flat terrain and to realize running gaits. The proposed methods are validated in different settings: from conceptual simulations in Matlab to validations in the DART dynamic simulation environment, up to experimental tests on the NAO and OP3 platforms.
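    The following sketch illustrates the flavour of such a predictive controller on a single axis: the humanoid is reduced to a Linear Inverted Pendulum, the ZMP is constrained to stay inside a moving support region, and a stability constraint ties the current divergent component of motion to the discounted future ZMP plan. Footstep positions, horizon length, and the use of a generic SLSQP solver are assumptions made for illustration, not the formulation used in the thesis.

```python
# Simplified single-axis ZMP MPC under the Linear Inverted Pendulum model,
# with a stability constraint on the divergent component of motion (DCM).
import numpy as np
from scipy.optimize import minimize

g, h = 9.81, 0.8                 # gravity, assumed CoM height
omega = np.sqrt(g / h)
dt, N = 0.1, 16                  # sampling time and prediction horizon

# Assumed footstep plan: ZMP must stay in a box around each planned foot position.
z_ref = np.repeat([0.0, 0.1, 0.2, 0.3], 4)[:N]      # piecewise-constant reference
half_width = 0.05
bounds = [(zr - half_width, zr + half_width) for zr in z_ref]

# Stability constraint: the current DCM equals the discounted sum of future ZMP
# values; the tail beyond the horizon is lumped into the last sample.
x_c, xd_c = 0.0, 0.1                                # current CoM position/velocity
xi0 = x_c + xd_c / omega                            # divergent component of motion
w = np.exp(-omega * dt * np.arange(N)) * (1 - np.exp(-omega * dt))
w[-1] += np.exp(-omega * dt * N)

def cost(z):
    return np.sum((z - z_ref) ** 2)                 # track the ZMP reference

res = minimize(cost, z_ref, method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": lambda z: w @ z - xi0}])
print("planned ZMP over the horizon:", np.round(res.x, 3))
```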

    Geometry-aware Manipulability Learning, Tracking and Transfer

    Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control, and design robot dexterity as a function of the joint configuration. This descriptor can be designed according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel manipulability transfer framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller allowing robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand, and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach.
    Comment: Accepted for publication in the International Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices.
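    As a small illustration of the quantities involved, the sketch below computes the velocity manipulability ellipsoid M = J J^T for an assumed planar 2-link arm and measures how far it is from a desired ellipsoid with a log-Euclidean SPD distance. The arm, the chosen metric, and the parameter values are illustrative assumptions, not the paper's tensor-based GMM formulation or its controller.

```python
# Manipulability ellipsoids are symmetric positive definite (SPD) matrices, so
# differences between them are measured on the SPD manifold rather than with a
# plain Euclidean norm.
import numpy as np
from scipy.linalg import logm

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2-link arm (illustrative model)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(q):
    J = jacobian_2link(q)
    return J @ J.T                       # velocity manipulability ellipsoid

def spd_distance(A, B):
    """Log-Euclidean distance between two SPD matrices."""
    return np.linalg.norm(logm(A) - logm(B), "fro")

M_desired = manipulability(np.array([0.3, 1.2]))   # e.g. from a demonstration
M_current = manipulability(np.array([0.5, 0.8]))
print("SPD distance to desired ellipsoid:", spd_distance(M_current, M_desired))
```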

    Can DMD obtain a scene background in color?

    A background model describes a scene without any foreground objects and has a number of applications, ranging from video surveillance to computational photography. Recent studies have introduced Dynamic Mode Decomposition (DMD) as a method for robustly separating video frames into a background model and foreground components. While the method as introduced operates on grayscale conversions of the color images, in this study we propose a technique to obtain the background model directly in the color domain. The effectiveness of our technique is demonstrated using the publicly available Scene Background Initialisation (SBI) dataset. Our results show, both qualitatively and quantitatively, that DMD can successfully obtain a colored background model.
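    A rough sketch of the underlying idea: run exact DMD on the frame snapshots and keep the mode whose eigenvalue is closest to 1 (near-zero frequency) as the background. Here each color channel is processed independently, which is one straightforward way to stay in the color domain; the paper's actual handling of color, rank truncation, and mode selection may differ, and the data below is a synthetic toy.

```python
# Exact DMD on video snapshots; the near-unit-eigenvalue mode approximates the
# static background. Synthetic toy data, illustrative rank choice.
import numpy as np

def dmd_background(frames, rank=10):
    """frames: (T, H, W) array of one color channel; returns an (H, W) background."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1).T.astype(float)        # pixels x time snapshots
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)     # reduced linear operator
    eigvals, eigvecs = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.T @ np.diag(1.0 / s) @ eigvecs   # DMD modes
    bg_idx = np.argmin(np.abs(np.abs(eigvals) - 1.0) + np.abs(np.angle(eigvals)))
    amps = np.linalg.lstsq(modes, X[:, 0].astype(complex), rcond=None)[0]
    return np.abs(modes[:, bg_idx] * amps[bg_idx]).reshape(H, W)

# Toy example: a static gradient "background" plus a moving bright square.
T, H, W = 30, 32, 32
frames = np.tile(np.linspace(0, 1, W), (T, H, 1))
for t in range(T):
    a = t % (W - 4)
    frames[t, 10:14, a:a + 4] += 1.0
frames_rgb = np.stack([frames, frames * 0.5, frames * 0.8], axis=-1)  # fake RGB
bg = np.stack([dmd_background(frames_rgb[..., c]) for c in range(3)], axis=-1)
print(bg.shape)  # (32, 32, 3) color background estimate
```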