11 research outputs found

    Trajectory reconstruction for robot programming by demonstration

    The reproduction of hand movements by a robot remains difficult, and conventional learning methods cannot faithfully recreate these movements when the number of crossing points is very large. Programming by Demonstration offers a better route to solving this problem: the user's movements are tracked with a motion-capture system and a robot program is generated to reproduce the performed tasks. This paper presents a trajectory-level Programming by Demonstration system for reproducing hand/tool movements with a manipulator robot; the user's movements were tracked with the ArToolkit and the trajectories reconstructed using constrained cubic splines. The results obtained with the constrained cubic spline were compared with standard cubic spline interpolation. Finally, the resulting trajectories were simulated in a virtual environment on the Puma 600 robot.
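    The comparison at the heart of the paper is between a standard cubic spline, which can overshoot between waypoints, and a constrained cubic spline, which limits the slope at each point. Below is a minimal Python sketch of that contrast using Kruger-style constrained slopes; the waypoints and the scipy usage are illustrative assumptions, not the paper's code.

```python
# Minimal sketch: constrained cubic spline (Kruger-style slopes) vs. a
# standard cubic spline for trajectory reconstruction. Waypoints invented.
import numpy as np
from scipy.interpolate import CubicSpline, CubicHermiteSpline

def constrained_slopes(x, y):
    """Per-point slopes that suppress overshoot between waypoints."""
    s = np.diff(y) / np.diff(x)          # secant slope of each segment
    m = np.zeros_like(y)
    for i in range(1, len(x) - 1):
        if s[i - 1] * s[i] > 0:          # same sign: harmonic mean of secants
            m[i] = 2.0 / (1.0 / s[i - 1] + 1.0 / s[i])
        # opposite signs or flat: slope stays 0 (treat as local extremum)
    m[0] = 1.5 * s[0] - 0.5 * m[1]       # one-sided end conditions
    m[-1] = 1.5 * s[-1] - 0.5 * m[-2]
    return m

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # sample times
q = np.array([0.0, 0.8, 0.9, 2.0, 2.1])   # one coordinate of the hand path
standard = CubicSpline(t, q)               # may overshoot between points
constrained = CubicHermiteSpline(t, q, constrained_slopes(t, q))

tt = np.linspace(0, 4, 200)
print("max standard:", standard(tt).max(), "max constrained:", constrained(tt).max())
```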

    Symbolic-based recognition of contact states for learning assembly skills

    Imitation learning is gaining attention because it enables robots to learn skills from human demonstrations. One major industrial activity that can benefit from imitation learning is the learning of new assembly processes. An essential characteristic of an assembly skill is its different contact states (CSs), which determine how movements must be adjusted to perform the assembly task successfully. Humans recognise CSs through haptic feedback and execute complex assembly tasks accordingly. Hence, CSs are generally recognised using force and torque information. This process is not straightforward due to variations in assembly tasks, signal noise and ambiguity in interpreting force/torque (F/T) information. In this research, an investigation was conducted into recognising CSs during an assembly process with geometrical variation on the mating parts. The F/T data collected from several human trials were pre-processed, segmented and represented as symbols, which were then used to train a probabilistic model; the trained model was validated on unseen datasets. The proposed approach aims to improve recognition accuracy and reduce computational effort by combining symbolic and probabilistic techniques. The model successfully recognised CSs based on force information alone, showing that such models can assist in imitation learning.
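    As a rough illustration of the symbolic-plus-probabilistic idea (not the authors' exact pipeline), the sketch below discretises a 1-D force signal into quantile-based symbols, learns a per-contact-state symbol-transition model, and classifies an unseen segment by log-likelihood. All signal values, bin counts and class names are invented for the example.

```python
# Sketch: symbolise force signals, learn per-contact-state transition
# probabilities, classify unseen segments by likelihood. Data invented.
import numpy as np

def fit_edges(signals, n_symbols=5):
    """Quantile bin edges fitted on pooled training signals."""
    pooled = np.concatenate(signals)
    return np.quantile(pooled, np.linspace(0, 1, n_symbols + 1)[1:-1])

def transition_model(symbol_seqs, n_symbols=5):
    """Laplace-smoothed symbol-transition matrix for one contact state."""
    counts = np.ones((n_symbols, n_symbols))          # smoothing prior
    for seq in symbol_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Toy training data: two contact states with different force signatures.
rng = np.random.default_rng(0)
free = [rng.normal(0.0, 0.1, 200) for _ in range(10)]           # free motion
contact = [5.0 + rng.normal(0.0, 1.0, 200) for _ in range(10)]  # in contact
edges = fit_edges(free + contact)
models = {"free": transition_model([np.digitize(s, edges) for s in free]),
          "contact": transition_model([np.digitize(s, edges) for s in contact])}

unseen = np.digitize(5.0 + rng.normal(0.0, 1.0, 200), edges)
print(max(models, key=lambda k: log_likelihood(unseen, models[k])))  # "contact"
```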

    A Framework for real-time, autonomous anomaly detection over voluminous time-series geospatial data streams

    In this research work we present an approach encompassing both algorithm and system design to detect anomalies in data streams. Individual observations within these streams are multidimensional, with each dimension corresponding to a feature of interest. We consider time-series geospatial datasets generated by remote and in situ observational devices. Three aspects make this problem particularly challenging: (1) the cumulative volume and rates of data arrivals, (2) anomalies evolve over time, and (3) there are spatio-temporal correlations associated with the data. Therefore, anomaly detection must be accurate and performed in real time. Given the data volumes involved, solutions must minimize user intervention and be amenable to distributed processing to ensure scalability. Our approach achieves accurate, high-throughput classifications in real time. We rely on Expectation Maximization (EM) to build Gaussian Mixture Models (GMMs) that model the densities of the training data. Rather than one all-encompassing model, our approach involves multiple model instances, each of which is responsible for a particular geographical extent and can also adapt as data evolves. We have incorporated these algorithms into our distributed storage platform, Galileo, and profiled their suitability through empirical analysis, which demonstrates high throughput (10,000 observations per second, per node) and low latency on real-world datasets.
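    A minimal sketch of the core mechanism follows: one EM-fitted Gaussian Mixture Model per geographical extent, with observations flagged as anomalous when their log-density falls below a per-model threshold. The extent key, feature count and threshold percentile are illustrative assumptions, and scikit-learn's GaussianMixture stands in for the paper's implementation inside Galileo.

```python
# Sketch: per-extent GMMs fitted with EM; low log-density => anomaly.
import numpy as np
from sklearn.mixture import GaussianMixture

class ExtentAnomalyDetector:
    def __init__(self, n_components=3, pct=1.0):
        self.models, self.thresholds = {}, {}
        self.n_components, self.pct = n_components, pct

    def fit_extent(self, key, X):
        """Train the model instance responsible for one geographical extent."""
        gmm = GaussianMixture(n_components=self.n_components).fit(X)
        scores = gmm.score_samples(X)                 # per-point log-density
        self.models[key] = gmm
        self.thresholds[key] = np.percentile(scores, self.pct)

    def is_anomaly(self, key, x):
        score = self.models[key].score_samples(x.reshape(1, -1))[0]
        return score < self.thresholds[key]

rng = np.random.default_rng(1)
detector = ExtentAnomalyDetector()
detector.fit_extent("9x1w", rng.normal(0, 1, size=(5000, 4)))        # 4 features
print(detector.is_anomaly("9x1w", np.array([8.0, 8.0, 8.0, 8.0])))   # True
```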

    A Probabilistic Programming by Demonstration Framework Handling Constraints in Joint Space and Task Space

    We present a probabilistic architecture for generically solving the problem of extracting task constraints through a Programming by Demonstration (PbD) framework and for generalizing the acquired knowledge to various situations. In previous work, we proposed an approach based on Gaussian Mixture Regression (GMR) to find a controller for the robot reproducing the essential characteristics of a skill in joint space and in task space through Lagrange optimization. In this paper, we extend this approach to a more generic procedure that handles constraints in joint space and in task space simultaneously, by combining the probabilistic representation of the task constraints directly with a simple Jacobian-based inverse kinematics solution. Experiments with two 5-DOF Katana robots are presented on manipulation tasks that consist of handling and displacing a set of objects.
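    One common way to blend the two constraint sets, in the spirit described above, is a weighted least-squares velocity step in which precision matrices (which the probabilistic model would supply) weight the joint-space and task-space errors and the Jacobian maps between the two spaces. The sketch below uses a toy 2-link planar arm as a stand-in for the Katana kinematics; all weights and targets are illustrative.

```python
# Sketch: weighted least-squares blend of joint-space and task-space targets.
import numpy as np

def jacobian(q, l1=1.0, l2=1.0):
    """Planar 2-link arm Jacobian (stand-in for the real kinematics)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def blended_step(q, dq_joint, dx_task, Wq, Wx):
    """Velocity minimising Wq-weighted joint error plus Wx-weighted task error:
    (Wq + J^T Wx J) dq = Wq dq_joint + J^T Wx dx_task."""
    J = jacobian(q)
    A = Wq + J.T @ Wx @ J
    b = Wq @ dq_joint + J.T @ Wx @ dx_task
    return np.linalg.solve(A, b)

q = np.array([0.3, 0.5])
dq_joint = np.array([0.1, -0.2])   # desired joint velocity (e.g. from GMR)
dx_task = np.array([0.05, 0.0])    # desired end-effector velocity (e.g. from GMR)
Wq = np.diag([1.0, 1.0])           # joint-space precision (inverse covariance)
Wx = np.diag([10.0, 10.0])         # task-space precision: task dominates here
print(blended_step(q, dq_joint, dx_task, Wq, Wx))
```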

    Robot Learning by observing human actions

    Get PDF
    Nowadays, robotics is entering our everyday lives: robots can be found in industry, in offices and even in homes. The more robots come into contact with people, the greater the demand for new capabilities and features, so that robots can act in case of need, help humans or serve as companions. It therefore becomes essential to have a quick and easy way to teach robots new skills; that is the aim of Robot Learning from Demonstration. This paradigm makes it possible to program new tasks in a robot directly through demonstrations. This thesis proposes a novel approach to Robot Learning from Demonstration that can learn new skills from natural demonstrations carried out by naive users. To this end, we introduce a novel Robot Learning from Demonstration framework with novel approaches in all functional sub-units: from data acquisition to motion elaboration, from information modeling to robot control. We describe a novel method for extracting 3D motion flow information from both RGB and depth data acquired with recently introduced consumer RGB-D cameras. The motion data are computed over time to recognize and classify human actions. We also describe new techniques for remapping human motion to robot joints; our methods allow people to interact naturally with robots by re-targeting whole-body movements in an intuitive way. We develop algorithms for both humanoid and manipulator motion and test them in different situations. Finally, we improve the modeling techniques by using a probabilistic method, the Donut Mixture Model, which can manage the several interpretations that different people produce when performing a task. The estimated model can also be updated directly using new attempts carried out by the robot, a feature that is very important for rapidly obtaining correct robot trajectories from only a few human demonstrations. A further contribution of this thesis is a set of new virtual models for the different robots used to test our algorithms. All the developed models are ROS-compliant, so they can be used to foster research in the field across the community of this widely used robotics framework. Moreover, a new 3D dataset was collected to compare different action recognition algorithms; it contains both RGB-D information coming directly from the sensor and skeleton data provided by a skeleton tracker.
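    As a loose illustration of the remapping problem mentioned above (not the thesis' actual method), the sketch below linearly maps tracked human joint angles into a robot's joint ranges and clamps them to its limits; all joint names and ranges are invented.

```python
# Sketch: naive human-to-robot joint retargeting by range mapping. Invented
# joint names and limits; a real retargeter would respect kinematic structure.
import numpy as np

HUMAN_RANGE = {"shoulder_pitch": (-1.0, 2.5), "elbow": (0.0, 2.6)}   # radians
ROBOT_LIMIT = {"shoulder_pitch": (-0.5, 1.8), "elbow": (0.1, 2.2)}   # radians

def retarget(human_angles):
    """Linearly map each human joint angle into the robot's joint range."""
    robot_angles = {}
    for joint, angle in human_angles.items():
        h_lo, h_hi = HUMAN_RANGE[joint]
        r_lo, r_hi = ROBOT_LIMIT[joint]
        t = (angle - h_lo) / (h_hi - h_lo)            # normalise to [0, 1]
        robot_angles[joint] = np.clip(r_lo + t * (r_hi - r_lo), r_lo, r_hi)
    return robot_angles

print(retarget({"shoulder_pitch": 1.2, "elbow": 2.0}))
```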

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the easiest machines to work with: first, because the separation of robot and human workspaces imposes an additional financial burden, and second, because of the significant re-programming cost when products change, especially in Small and Medium-sized Enterprises (SMEs). There is therefore a significant need to reduce the programming effort required for robots to perform various tasks while sharing the same space with a human operator. The robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use their various senses to perform tasks, such as vision, smell and taste. One sense that plays a significant role in human activity is 'touch' or 'force'. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity, and this information implicitly conveys data about contact force, object stiffness and many other properties. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to deal with force perception and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information; knowledge must then be extracted and encoded from those features so that the captured skills can be reproduced in new scenarios. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task. Four robotics problems were addressed within the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second was the recognition of the Contact State (CS) during assembly tasks; to tackle it, a symbolic approach was proposed in which the force/torque signals captured during a demonstrated assembly task were encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD; here, an ensemble machine learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts; a new learning approach based on Artificial Potential Fields (APF) was proposed to learn a Peg-in-Hole (PiH) assembly task that includes both non-contact and contact phases. In summary, this thesis focuses on data-driven approaches to learning force-based tasks in an industrial context. Different machine learning approaches were implemented, developed and evaluated in different scenarios, and their performance was compared with approaches based on mathematical modelling.
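    To illustrate the APF ingredient of the last challenge, here is a minimal 2-D sketch of gradient descent on a combined attractive/repulsive potential; the thesis addresses a full PiH task, whereas this toy shows only the field mechanics, with all gains and positions invented.

```python
# Sketch: Artificial Potential Field in 2-D. Attractive pull toward the goal,
# repulsive push away from obstacles inside an influence radius rho0.
import numpy as np

def apf_step(p, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.4, step=0.05):
    """One gradient-descent step on the combined potential."""
    force = -k_att * (p - goal)                       # attractive: pull to goal
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 1e-9 < d < rho0:                           # repulsive inside rho0 only
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (p - obs) / d
    return p + step * force

p = np.array([0.0, 1.0])                              # peg tip start
goal = np.array([1.0, 0.0])                           # hole centre
obstacles = [np.array([0.5, 0.55])]                   # e.g. hole edge / chamfer
for _ in range(200):
    p = apf_step(p, goal, obstacles)
print(np.round(p, 3))                                 # converges near the goal
```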