41 research outputs found

    Manipulation primitives: A paradigm for abstraction and execution of grasping and manipulation tasks

    Sensor-based reactive and hybrid approaches have proven a promising line of study for addressing imperfect knowledge in grasping and manipulation. However, reactive approaches are usually tightly coupled to a particular embodiment, making transfer of knowledge difficult. This paper proposes a paradigm for modeling and execution of reactive manipulation actions that makes knowledge transfer to different embodiments possible while retaining the reactive capabilities of the embodiments. The proposed approach extends the idea of control primitives coordinated by a state machine by introducing an embodiment-independent layer of abstraction. Abstract manipulation primitives constitute a vocabulary of atomic, embodiment-independent actions, which can be coordinated using state machines to describe complex actions. To obtain embodiment-specific models, the abstract state machines are automatically translated into embodiment-specific ones, such that the full capabilities of each platform can be utilized. The strength of the manipulation-primitives paradigm is demonstrated by developing a set of corresponding embodiment-specific primitives for object transport, including a complex reactive grasping primitive. The robustness of the approach is studied experimentally by emptying a box filled with several unknown objects. The embodiment independence is studied by performing a manipulation task on two different platforms using the same abstract description.
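The core idea of the abstract, as I read it, can be sketched as a state machine over an embodiment-independent primitive vocabulary, where each platform supplies its own implementation of every primitive. The following is a minimal illustrative sketch, not the paper's actual framework; all names (the transport vocabulary, outcomes such as "slip") are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class StateMachine:
    """Abstract coordination layer: maps (primitive, outcome) -> next primitive."""
    transitions: Dict[Tuple[str, str], str]
    start: str

    def run(self, primitives: Dict[str, Callable[[], str]]) -> List[Tuple[str, str]]:
        trace, state = [], self.start
        while state != "done":
            outcome = primitives[state]()  # execute embodiment-specific primitive
            trace.append((state, outcome))
            state = self.transitions[(state, outcome)]
        return trace


# One abstract description of an object-transport task (illustrative):
abstract_sm = StateMachine(
    transitions={
        ("approach", "ok"): "grasp",
        ("grasp", "ok"): "move",
        ("grasp", "slip"): "approach",  # reactive recovery on grasp failure
        ("move", "ok"): "release",
        ("release", "ok"): "done",
    },
    start="approach",
)

# Each embodiment provides its own implementations of the same vocabulary;
# here both are trivial stubs that always succeed.
def make_primitives(embodiment_name: str) -> Dict[str, Callable[[], str]]:
    return {p: (lambda: "ok") for p in ("approach", "grasp", "move", "release")}

trace = abstract_sm.run(make_primitives("parallel_gripper"))
```

The same `abstract_sm` could be run against a second embodiment's primitive dictionary unchanged, which is the transfer property the abstract emphasizes.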

    Hybrid control trajectory optimization under uncertainty

    Trajectory optimization is a fundamental problem in robotics. While optimization of continuous control trajectories is well developed, many applications require both discrete and continuous, i.e., hybrid, controls. Finding an optimal sequence of hybrid controls is challenging due to the exponential explosion of discrete control combinations. Our method, based on Differential Dynamic Programming (DDP), circumvents this problem by incorporating discrete actions inside DDP: we first optimize continuous mixtures of discrete actions and subsequently force the mixtures into fully discrete actions. Moreover, we show how our approach can be extended to partially observable Markov decision processes (POMDPs) for trajectory planning under uncertainty. We validate the approach in a car-driving problem where the robot has to switch discrete gears and in a box-pushing application where the robot can switch the side of the box to push. The pose and the friction parameters of the pushed box are initially unknown and only indirectly observable.
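The key relaxation described in the abstract, optimizing a continuous mixture of discrete actions and then forcing it to a single discrete choice, can be illustrated on a toy one-step problem. This is not the paper's DDP formulation; the softmax parameterization, learning rate, and annealing schedule below are all assumptions for illustration.

```python
import numpy as np

# Toy problem: choose one of K=3 discrete actions with known per-action costs.
# The discrete choice is relaxed into a continuous mixture via a softmax over
# logits; annealing the temperature forces the mixture toward one-hot.
costs = np.array([3.0, 1.0, 2.5])


def softmax(z, temp):
    z = z / temp
    e = np.exp(z - z.max())
    return e / e.sum()


logits = np.zeros(3)
temp = 1.0
for step in range(200):
    w = softmax(logits, temp)
    # Gradient of the expected cost E[c] = w @ costs w.r.t. the logits,
    # using the softmax Jacobian: dE/dz_j = w_j * (c_j - E[c]) / temp.
    grad = w * (costs - w @ costs) / temp
    logits -= 0.5 * grad
    temp = max(0.05, temp * 0.98)  # annealing: mixture -> fully discrete

w = softmax(logits, temp)
discrete_action = int(np.argmax(w))  # the cheapest action, index 1
```

The mixture weights concentrate on the cheapest action as the temperature drops, mirroring the "first optimize mixtures, then force them discrete" strategy; in the paper this happens inside each DDP backward/forward pass rather than on a static cost vector.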

    A probabilistic framework for learning geometry-based robot manipulation skills

    Programming robots to perform complex manipulation tasks is difficult because many tasks require sophisticated controllers that may rely on data such as manipulability ellipsoids, stiffness/damping matrices, and inertia matrices. Such data are naturally represented as Symmetric Positive Definite (SPD) matrices to capture their specific geometric characteristics, which makes hard-coding them more complex. To alleviate this difficulty, the Learning from Demonstration (LfD) paradigm can be used to learn robot manipulation skills with specific geometric constraints encapsulated in SPD matrices. Learned skills often need to be adapted when they are applied to new situations. While existing techniques can adapt Cartesian and joint-space trajectories described by various desired points, the adaptation of motion skills encapsulated in SPD matrices remains an open problem. In this paper, we introduce a new LfD framework that can learn robot manipulation skills encapsulated in SPD matrices from expert demonstrations and adapt them to new situations defined by new start, via, and end matrices. The proposed approach leverages Kernelized Movement Primitives (KMPs) to generate SPD-based robot manipulation skills that smoothly adapt the demonstrations to conform to new constraints. We validate the proposed framework in simulations as well as a real experiment scenario.
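Why SPD-valued skills need special treatment can be seen in a small sketch: naive Euclidean operations on SPD matrices can leave the manifold, whereas interpolating in the matrix-logarithm domain (the log-Euclidean approach) keeps every intermediate matrix SPD. This is an illustrative substitute for the paper's KMP-based formulation, not the method itself; the matrices below are made up.

```python
import numpy as np

# Matrix log/exp for symmetric matrices via eigendecomposition.
def sym_log(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def sym_exp(L):
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

def spd_geodesic(S0, S1, t):
    # Log-Euclidean geodesic: exp((1-t) * log(S0) + t * log(S1)).
    # Every point on this path is guaranteed SPD.
    return sym_exp((1 - t) * sym_log(S0) + t * sym_log(S1))

# Hypothetical new start/end constraints, e.g. desired stiffness matrices:
start = np.diag([1.0, 4.0])
end = np.array([[3.0, 1.0],
                [1.0, 2.0]])

profile = [spd_geodesic(start, end, t) for t in np.linspace(0.0, 1.0, 5)]
eig_ok = all(np.all(np.linalg.eigvalsh(S) > 0) for S in profile)
```

A KMP-style method would additionally blend demonstrated SPD profiles with such constraint matrices through kernel regression in the tangent (log) space, but the SPD-preservation issue this sketch demonstrates is the geometric core of the problem.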

    A framework for generating tunable test functions for multimodal optimization

    Multimodal function optimization, where the aim is to locate more than one solution, has attracted growing interest, especially in the evolutionary computing research community. To evaluate the strengths and weaknesses of multimodal optimization algorithms experimentally, it is important to use test functions representing different characteristics and various levels of difficulty. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. This paper describes an attempt to construct a software framework that includes a variety of easily tunable test functions. The aim is to provide a general and easily expandable environment for testing different methods of multimodal optimization. Several function families with different characteristics are included. The framework implements new parameterizable function families for generating desired landscapes. Additionally, the framework implements a selection of well-known test functions from the literature, which can be rotated and stretched. The software module can easily be imported into any optimization algorithm implementation compatible with the C programming language. As an application example, 8 optimization approaches are compared by their ability to locate several global optima over a set of 16 functions with different properties generated by the proposed module. The effects of function regularity, dimensionality, and number of local optima on the performance of different algorithms are studied.
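A tunable multimodal test function of the kind the abstract describes can be sketched very compactly. The framework itself targets C and its function families are not specified here, so the family below is a hypothetical stand-in: a 1-D landscape whose number of equal-valued global optima is a single parameter.

```python
import numpy as np

# Hypothetical parameterizable test-function family (illustrative only):
# f(x) = sin(pi * n * x)^2 on [0, 1] has exactly n global maxima of value 1,
# located at x = (2k + 1) / (2n) for k = 0 .. n-1.
def tunable_multimodal(x, n_optima=4):
    return np.sin(np.pi * n_optima * x) ** 2

xs = np.linspace(0.0, 1.0, 10001)
ys = tunable_multimodal(xs, n_optima=4)

# Counting strict local maxima on the sampled grid recovers the
# requested number of optima.
peaks = int(np.sum((ys[1:-1] > ys[:-2]) & (ys[1:-1] > ys[2:])))
```

Stretching, rotation (in higher dimensions), and added local optima, as the framework supports, would be further parameters layered on such a base family.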