
    Robot Performing Peg-in-Hole Operations by Learning from Human Demonstration

    This paper presents a novel approach for a robot to conduct assembly tasks, namely robot learning from human demonstration. The learning of the robotic assembly task is divided into two phases: teaching and reproduction. During the teaching phase, a wrist camera is used to scan the object on the workbench and extract its SIFT features. The human demonstrator teaches the robot to grasp the object from the effective position and orientation. During the reproduction phase, the robot uses the learned knowledge to reproduce the grasping manipulation autonomously. The robustness of the robotic assembly system is evaluated through a series of grasping trials. The dual-arm Baxter robot is used to perform the peg-in-hole task using the proposed approach. Experimental results show that the robot is able to accomplish the assembly task by learning from human demonstration, without traditional dedicated programming.
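    The teaching-phase feature extraction described above maps naturally onto OpenCV's SIFT interface. Below is a minimal sketch of that step; the image path, the matcher choice and the ratio threshold are illustrative assumptions, not details from the paper.

```python
# Sketch of the teaching-phase step described above: scan the workbench object
# with a wrist camera and extract SIFT features for later matching.
# Assumes OpenCV >= 4.4 (SIFT is in the main module); the file name and the
# match-ratio threshold are illustrative, not taken from the paper.
import cv2

def extract_sift(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

def match_to_taught_object(taught_desc, live_desc, ratio=0.75):
    # Lowe's ratio test on brute-force kNN matches.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.knnMatch(taught_desc, live_desc, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]
```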

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying all the different in-contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.

    TeslaCharge: Smart Robotic Charger Driven by Impedance Control and Human Haptic Patterns

    The growing demand for electric vehicles requires the development of automated car charging methods. At the moment, the process of charging an electric car is completely manual and requires physical effort to accomplish the task, which is not suitable for people with disabilities. Typically, research effort is focused on detecting the position and orientation of the socket, which has resulted in a relatively high accuracy of ±5 mm and ±10°. However, this accuracy is not enough to complete the charging process. In this work, we focus on designing a novel methodology for robust robotic plug-in and plug-out based on human haptics, to overcome the error in the position and orientation of the socket. Participants were invited to perform the charging task, and their cognitive capabilities were recognized by measuring the applied forces along with the movement of the charger. Three controllers were designed based on impedance control to mimic the human patterns of charging an electric car. The recorded data from humans were used to calibrate the parameters of the impedance controllers: inertia M_d, damping D_d, and stiffness K_d. A robotic validation was performed, where the designed controllers were applied to a UR10 robot. Using the proposed controllers and the human kinesthetic data, it was possible to successfully automate the operation of charging an electric car. (Comment: Accepted to the 21st IEEE International Conference on Advanced Robotics (ICAR 2023); IEEE copyright.)
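    For readers unfamiliar with the control law these parameters refer to, the standard Cartesian impedance relation is M_d x'' + D_d x' + K_d (x - x_d) = F_ext. Below is a minimal one-axis simulation sketch; the numeric gains and time step are illustrative assumptions, not the human-calibrated values from the paper.

```python
# Minimal sketch of the impedance law M_d*x'' + D_d*x' + K_d*(x - x_d) = F_ext,
# integrated per axis with explicit Euler. Gains and time step are illustrative
# placeholders, not the human-calibrated values reported in the paper.
def impedance_step(x, x_dot, x_des, f_ext, M_d, D_d, K_d, dt=0.002):
    """One integration step; returns updated position and velocity."""
    x_ddot = (f_ext - D_d * x_dot - K_d * (x - x_des)) / M_d
    x_dot = x_dot + x_ddot * dt
    x = x + x_dot * dt
    return x, x_dot

# Example: a constant 5 N contact force pushes the compliant axis off its target.
x, v = 0.0, 0.0
for _ in range(1000):
    x, v = impedance_step(x, v, x_des=0.0, f_ext=5.0, M_d=2.0, D_d=40.0, K_d=400.0)
print(f"steady-state offset ~ {x:.4f} m (expected F/K = {5.0/400.0:.4f} m)")
```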

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with. Firstly, this is due to the separation of the robot and human workspaces, which imposes an additional financial burden; secondly, due to the significant re-programming cost when products change, especially in Small and Medium-sized Enterprises (SMEs). Therefore, there is a significant need to reduce the programming effort required to enable robots to perform various tasks while sharing the same space with a human operator. Hence, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use their various senses, such as vision, smell and taste, to perform tasks. One sense that plays a significant role in human activity is 'touch' or 'force'. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity. This information also implicitly conveys data about contact force, object stiffness, and many other properties. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to deal with force perceptions and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information; knowledge must then be extracted and encoded from these task-relevant features, so that the captured skills can be reproduced in a new scenario. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task. Four robotics problems were addressed using the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge was the recognition of the Contact State (CS) during assembly tasks; to tackle it, a symbolic approach was proposed, in which the force/torque signals recorded during a demonstrated assembly task were encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD; here, an ensemble machine learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts; hence, a new learning approach based on an Artificial Potential Field (APF) was proposed to learn a Peg-in-Hole (PiH) assembly task that includes both no-contact and contact phases. To sum up, this thesis focuses on the use of data-driven approaches to learning force-based tasks in an industrial context. Different machine learning approaches were implemented, developed and evaluated in different scenarios, and their performance was compared with mathematical-modelling-based approaches.
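    The abstract names an Artificial Potential Field formulation for the last challenge but gives no equations; a common textbook form combines an attractive quadratic well toward the goal with a repulsive term near obstacles. The sketch below uses that generic (Khatib-style) form; the gains, distances and planar setting are assumptions for illustration, not the thesis' actual formulation.

```python
# Generic artificial potential field (APF) velocity command: an attractive
# quadratic well toward the goal plus a Khatib-style repulsive term inside an
# influence radius rho0. Gains, radius and the planar setup are illustrative
# assumptions, not the thesis' calibrated formulation.
import numpy as np

def apf_velocity(q, q_goal, q_obs, k_att=5.0, k_rep=1e-4, rho0=0.05):
    v = -k_att * (q - q_goal)          # gradient of 0.5*k_att*||q - q_goal||^2
    d = np.linalg.norm(q - q_obs)
    if d < rho0:                       # repulsion acts only inside rho0
        v += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (q - q_obs) / d
    return v

q = np.array([0.10, 0.05])             # current peg position (m)
goal = np.zeros(2)                     # hole centre
obstacle = np.array([0.05, 0.02])      # e.g. a chamfer edge to skirt around
for _ in range(400):
    v = apf_velocity(q, goal, obstacle)
    speed = np.linalg.norm(v)
    if speed > 1.0:                    # clip to keep the Euler step stable
        v /= speed
    q = q + 0.01 * v
print("final distance to hole centre:", np.linalg.norm(q - goal))
```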

    Teaching Accommodation Task Skills: from Human Demonstration to Robot Control via Artificial Neural Networks

    A simple edge-mating task, performed automatically by accommodation control, was used to study the feasibility of using data collected during a human demonstration to train an artificial neural network (ANN) to control a common robot manipulator to complete similar tasks. The 2-dimensional (planar) edge-mating task, which aligns a peg normal to a flat table, served as the basis for the investigation. A simple multi-layered perceptron (MLP) ANN with a single hidden layer and linear output nodes was trained using the back-propagation algorithm with momentum. The inputs to the ANN were the planar components of the contact force between the peg and the table. The outputs from the ANN were the planar components of a commanded velocity. The controller was architected so the ANN could learn a configuration-independent solution by operating in tool-frame coordinates. As a baseline of performance, a simple accommodation matrix capable of completing the edge-mating task was determined and implemented in simulation and on the PUMA manipulator. The accommodation matrix was also used to synthesize various forms of training data, which were used to gain insights into the function and vulnerabilities of the proposed control scheme. Human demonstration data were collected using a gravity-compensated PUMA 562 manipulator and a custom-built planar, low-impedance motion measurement system (PLIMMS).
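    Accommodation control as used here maps sensed contact force to a commanded velocity through a constant matrix, v = A f, which the MLP then learns to replace. A minimal sketch under assumed values follows; the matrix entries and the stand-in force reading are placeholders, not the baseline reported in the study.

```python
# Minimal accommodation controller: commanded tool-frame velocity is a linear
# map of the sensed contact force, v = A @ f. The matrix entries and the
# stand-in force reading below are placeholders, not the study's baseline.
import numpy as np

# Planar accommodation matrix (tool frame): comply along x, comply less along y.
A = np.array([[0.02, 0.0],
              [0.0,  0.01]])

def accommodation_velocity(f_contact: np.ndarray) -> np.ndarray:
    """Map a planar contact force (N) to a commanded velocity (m/s)."""
    return A @ f_contact

f = np.array([3.0, -1.5])              # stand-in for a real force/torque reading
print("commanded velocity:", accommodation_velocity(f))
```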

    Ergodic Exploration using Tensor Train: Applications in Insertion Tasks

    By generating control policies that create natural search behaviors in autonomous systems, ergodic control provides a principled solution to address tasks that require exploration. A large class of ergodic control algorithms relies on spectral analysis, which suffers from the curse of dimensionality, both in storage and computation. This drawback has prohibited the application of ergodic control in robot manipulation, since it often requires exploration in a state space with more than two dimensions. Indeed, the original ergodic control formulation will typically not allow exploratory behaviors to be generated for a complete 6D end-effector pose. In this paper, we propose a solution for ergodic exploration based on spectral analysis in multidimensional spaces using low-rank tensor approximation techniques. We rely on tensor train decomposition, a recent approach from multilinear algebra for low-rank approximation and efficient computation of multidimensional arrays. The proposed solution is efficient both computationally and storage-wise, making it suitable for online implementation in robotic systems. The approach is applied to a peg-in-hole insertion task using a 7-axis Franka Emika Panda robot, where ergodic exploration allows the task to be achieved without requiring the use of force/torque sensors.
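    The tensor-train machinery referred to here can be illustrated compactly with the basic TT-SVD decomposition from multilinear algebra (Oseledets, 2011). The sketch below is a generic fixed-rank TT-SVD, not the paper's ergodic-control implementation; the test tensor and rank cap are assumptions.

```python
# Sketch of TT-SVD: compress a d-way array into tensor-train cores via a
# sequence of truncated SVDs. The rank cap and the test tensor are
# illustrative; the paper's ergodic-control pipeline is not reproduced here.
import numpy as np

def tt_svd(X, max_rank=8):
    """Decompose array X into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = X.shape
    cores, r_prev = [], 1
    C = X.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full array (sanity check only)."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out[0, ..., 0]

# Low-rank test: a separable 4D array has TT ranks of 1, so truncation is lossless.
grid = np.linspace(-1, 1, 16)
g = np.exp(-grid**2)
X = np.einsum('i,j,k,l->ijkl', g, g, g, g)
cores = tt_svd(X, max_rank=4)
err = np.linalg.norm(tt_reconstruct(cores) - X) / np.linalg.norm(X)
print("core shapes:", [c.shape for c in cores], "relative error:", err)
```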

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it then to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probability encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy range from 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
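    The GMR step just mentioned reproduces a motion by conditioning a GMM fitted over joint (time, position) data on the time input. A minimal sketch of the standard (unmodified) conditioning step follows, using scikit-learn's GaussianMixture for the fit; the 1D demonstration data and component count are illustrative assumptions, and the thesis' modifications to GMR are not reproduced.

```python
# Gaussian Mixture Regression: fit a GMM over joint (t, x) samples, then
# condition on t to recover E[x | t] as the reproduction trajectory.
# Demonstration data and component count are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
t = np.tile(np.linspace(0, 1, 100), 5)                  # 5 noisy demonstrations
x = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
gmm = GaussianMixture(n_components=6, random_state=0).fit(np.column_stack([t, x]))

def gmr(gmm, t_query):
    """Conditional mean E[x | t] of a 2D GMM fitted over (t, x)."""
    mu, S, w = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibility of each component for the input t
    h = np.array([w[k] * norm.pdf(t_query, mu[k, 0], np.sqrt(S[k, 0, 0]))
                  for k in range(gmm.n_components)])
    h /= h.sum(axis=0)
    # per-component conditional means, blended by responsibility
    cond = np.array([mu[k, 1] + S[k, 1, 0] / S[k, 0, 0] * (t_query - mu[k, 0])
                     for k in range(gmm.n_components)])
    return (h * cond).sum(axis=0)

t_new = np.linspace(0, 1, 50)
print("reproduced trajectory sample:", gmr(gmm, t_new)[:5])
```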
    To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, by assuming that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the reproduced motions from GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where human skilled performance is required to cope robustly with various uncertainties during task execution.
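    The sEMG-to-force mapping described in this abstract can be illustrated with a generic sliding-window ("time-delayed") regressor: the last few samples of every channel are stacked into the input so the force estimate depends on recent muscle-activation history. Synthetic signals and scikit-learn's generic MLP below stand in for the thesis' actual TDNN; window length, channel count and data are all assumptions.

```python
# Sliding-window ("time-delayed") regression from multichannel sEMG to contact
# force: stack the last `delay` samples of every channel as the input vector.
# Synthetic data and a generic scikit-learn MLP stand in for the thesis' TDNN.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, channels, delay = 2000, 8, 10
emg = rng.standard_normal((T, channels))
force = np.convolve(emg.sum(axis=1), np.ones(delay) / delay, mode="same")  # toy target

def windowed(x, delay):
    """Rows of x stacked over a trailing window: shape (T-delay+1, delay*channels)."""
    return np.stack([x[i - delay + 1 : i + 1].ravel() for i in range(delay - 1, len(x))])

X = windowed(emg, delay)
y = force[delay - 1 :]
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])
mse = np.mean((model.predict(X[1500:]) - y[1500:]) ** 2)
print(f"held-out MSE: {mse:.3f}")
```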