
    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly and dedicated systems, which may not be ideal for aerospace manufacturing with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology for composing distributed software components that can be integrated dynamically at runtime. This gives the automation devices (robots, metrology, actuators, etc.) controlled by these software components the potential to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of the robot's inaccuracy, and the metrology then corrects the residual errors. In this work, a new calibration model is developed for serial robots with a parallelogram linkage that accounts for both geometric errors and the joint deflections induced by link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is used to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage; the control system is formed by hot-plugging the control applications of the robot and the metrology system. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre and halved the time previously required to correct the errors using the metrology alone. The experiments also demonstrate the capability of sharing one metrology system among more than one robot.
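    As a rough illustration of the two-stage scheme described above, the sketch below (Python) applies a model-based correction first and folds in a metrology-measured residual only when such a measurement is available. The function names, payload term and numerical error values are hypothetical, not taken from the thesis.

```python
import numpy as np

def calibrated_correction(payload_kg):
    """Stage 1: position correction predicted by an identified error model
    (geometric parameter errors plus gravity-induced joint deflection).
    All numbers here are illustrative, not identified values."""
    geometric = np.array([0.0009, -0.0004, 0.0012])            # constant geometric offset [m]
    deflection = np.array([0.0, 0.0, -0.00005]) * payload_kg   # load-dependent sag [m]
    return geometric + deflection

def commanded_target(target, payload_kg, metrology_residual=None):
    """Stage 2: when an intermittent metrology measurement of the remaining error
    is available, subtract it as well; otherwise rely on the calibration model alone."""
    correction = calibrated_correction(payload_kg)
    if metrology_residual is not None:
        correction = correction + metrology_residual
    return target - correction

target = np.array([0.8, 0.2, 1.1])                              # desired TCP position [m]
print(commanded_target(target, payload_kg=120))                 # calibration only
print(commanded_target(target, payload_kg=120,                  # calibration + metrology
                       metrology_residual=np.array([0.0002, 0.0001, -0.0003])))
```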

    Geometrical Error Analysis and Correction in Robotic Grinding

    The use of robots in industrial applications has become widespread in manufacturing tasks such as welding, finishing, polishing and grinding. Most robotic grinding work focuses on surface finish rather than accuracy and precision. It is therefore important to advance robotic machining technology so that more practical and competitive systems can be developed for components with accuracy and precision requirements. This thesis focuses on improving the level of accuracy in robotic grinding, a significant challenge in robotic applications because the kinematic accuracy of the robot's movement is much harder to guarantee than that of a conventional CNC machine tool. Aiming to improve robot accuracy, this work provides a novel method for determining the geometrical error by using the cutting tool as a probe, with Acoustic Emission monitoring used to modify robot commands and to detect the surfaces of the workpiece. The work also includes an applicable mathematical model for compensating machining errors in relation to geometrical position, as well as an optimised grinding method that addresses the need to eliminate the residual error when performing abrasive grinding with the robot. The work has demonstrated an improvement in machining precision from 50 µm to 30 µm, achieved by controlling the influential process variables such as depth of cut, wheel speed, feed speed, dressing condition and system time constant. The recorded data and the associated error reduction provide significant evidence to support the viability of a robotic system for various grinding applications, combining higher-quality and critical surface-finishing practices with an increased focus on the size and form of the generated components. This method could provide more flexibility to help designers and manufacturers control the final accuracy of a product machined with a robot system.
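    The probing idea above can be pictured with a short sketch: an acoustic-emission burst marks the instant of tool-workpiece contact, and the commanded position at that instant is compared with the expected surface location to obtain a geometrical error. The threshold, window length and feed values below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def detect_contact(ae_signal, threshold=0.05):
    """Return the index of the first sample whose acoustic-emission RMS exceeds the
    threshold, interpreted as tool-workpiece contact (threshold is illustrative)."""
    rms = np.sqrt(np.convolve(ae_signal ** 2, np.ones(32) / 32, mode="same"))
    hits = np.where(rms > threshold)[0]
    return int(hits[0]) if hits.size else None

def geometrical_offset(start_z, feed_per_sample, contact_index, expected_z):
    """Convert the contact instant into a positional error along the probing axis."""
    touched_z = start_z - contact_index * feed_per_sample
    return touched_z - expected_z

# Synthetic AE trace: background noise, then a burst when the wheel touches the surface
ae = np.concatenate([0.01 * np.random.randn(500), 0.2 * np.random.randn(200)])
idx = detect_contact(ae)
print("contact sample:", idx,
      "geometrical error [mm]:", geometrical_offset(5.0, 0.001, idx, expected_z=4.5))
```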

    Enhanced Positioning Algorithm Using a Single Image in an LCD-Camera System by Mesh Elements' Recalculation and Angle Error Orientation

    In this article, we present a method to position the tool in a micromachine system based on a camera-LCD-screen positioning system that also provides information about angular deviations of the tool axis during operation. Both position and angular deviations are obtained by reducing a matrix of LEDs in the image to a single rectangle in conical perspective, which is then treated by a photogrammetry method. This method computes the coordinates and orientation of the camera with respect to the fixed screen coordinate system. The image used consists of 5 × 5 lit LEDs, which are analysed by the algorithm to determine a rectangle of known dimensions. The coordinates of the vertices of the rectangle in space are obtained from the image by an inverse perspective computation. The method gives a good approximation of the central point of the rectangle and provides the inclination of the workpiece with respect to the LCD-screen coordinate system. A test of the method is designed with the assistance of a Coordinate Measurement Machine (CMM) to check the accuracy of the positioning method. The test shows good accuracy in the position measurement, but a high dispersion in the angular deviation is detected, although the direction of the inclination is correct in almost every case; this is due to the small values of the angles, which make the trigonometric approximations very erratic. This method is a good starting point for compensating angular deviation in vision-based micromachine tools, which is the principal source of error in these operations and represents the main contribution to the cost of machine-element parts. The authors want to thank the University Center of Defense at the Spanish Air Force Academy, MDE-UPCT, for financial support.
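    A minimal sketch of the pose computation step, assuming OpenCV's solvePnP as the inverse-perspective solver: the four rectangle vertices detected in the image are matched to their known screen coordinates, and the camera position and tilt relative to the screen are recovered. The rectangle size, pixel coordinates and camera intrinsics below are made-up values for illustration; the article's own photogrammetric formulation may differ.

```python
import numpy as np
import cv2

# Corners of the rectangle spanned by the lit LED matrix on the LCD, in screen
# coordinates (mm); the 30 mm size is an assumed value.
object_pts = np.array([[0, 0, 0], [30, 0, 0], [30, 30, 0], [0, 30, 0]], dtype=np.float64)

# Corresponding pixel coordinates of the rectangle vertices detected in the image
# (hypothetical measurements).
image_pts = np.array([[312, 241], [428, 239], [431, 354], [309, 357]], dtype=np.float64)

# Assumed pinhole intrinsics (fx, fy, cx, cy) with no lens distortion.
K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)

# Camera position expressed in the LCD frame, and tilt of the optical axis
# with respect to the screen normal.
cam_pos = -R.T @ tvec
tilt_deg = np.degrees(np.arccos(abs(R[2, 2])))
print("camera position [mm]:", cam_pos.ravel(), "tilt [deg]:", tilt_deg)
```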

    Parameter identification and model based control of direct drive robots

    Imperial Users only.

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with: firstly, because of the separation of the robot and human workspaces, which imposes an additional financial burden, and secondly, because of the significant re-programming cost when products change, especially in Small and Medium-sized Enterprises (SMEs). There is therefore a significant need to reduce the programming effort required for robots to perform various tasks while sharing the same space with a human operator. Hence, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use their various senses, such as vision, smell and taste, to perform tasks. One sense that plays a significant role in human activity is touch, or force. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity, and this information implicitly conveys data about contact force, object stiffness and much more. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to deal with force perceptions and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information; knowledge must then be extracted and encoded from these features so that the captured skills can be reproduced in a new scenario. In this thesis, these elements of LfD are achieved using different approaches depending on the demonstrated task, and four robotics problems are addressed within the LfD framework. The first challenge is to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge is the recognition of the Contact State (CS) during assembly tasks; to tackle this, a symbolic approach is proposed in which the force/torque signals recorded during a demonstrated assembly task are encoded as a sequence of symbols. The third challenge is to learn a human-robot co-manipulation task based on LfD; here, an ensemble machine learning approach is proposed to capture the skill. The last challenge is to learn an assembly task by demonstration in the presence of geometrical variation of the parts; for this, a new learning approach based on the Artificial Potential Field (APF) is proposed to learn a Peg-in-Hole (PiH) assembly task that includes both no-contact and contact phases. To sum up, this thesis focuses on the use of data-driven approaches to learning force-based tasks in an industrial context. Different machine learning approaches are implemented, developed and evaluated in different scenarios, and their performance is compared with mathematical-modelling-based approaches.
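    As an illustration of the symbolic contact-state encoding mentioned above, the sketch below quantises windowed force/torque means into discrete symbols and keeps only the transitions. The thresholds, window length and symbol alphabet are assumptions for demonstration, not the encoding used in the thesis.

```python
import numpy as np

def encode_symbols(ft_window, bins=(-5.0, -1.0, 1.0, 5.0)):
    """Map the mean of each force/torque channel in a window to a discrete symbol
    (L = large negative, n = negative, z = near zero, p = positive, P = large positive).
    The thresholds, in newtons / newton-metres, are illustrative only."""
    labels = np.array(list("LnzpP"))
    means = ft_window.mean(axis=0)
    return "".join(labels[np.digitize(means, bins)])

def encode_demonstration(ft_signal, window=50):
    """Turn a demonstrated assembly trial (N x 6 force/torque samples) into a
    sequence of contact-state symbols, one per window."""
    symbols = [encode_symbols(ft_signal[i:i + window])
               for i in range(0, len(ft_signal) - window, window)]
    # Collapse consecutive repeats so only contact-state transitions remain
    return [s for i, s in enumerate(symbols) if i == 0 or s != symbols[i - 1]]

# Synthetic trial: free motion, then contact with a sustained -8 N force along z
demo = np.vstack([np.zeros((200, 6)), np.tile([0, 0, -8, 0, 0, 0], (200, 1))])
print(encode_demonstration(demo + 0.1 * np.random.randn(*demo.shape)))
```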

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely the learning of force-based manipulation tasks. In such scenarios, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, and so on. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system can deal with force perceptions. The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. The proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A Mutual Information analysis is used to select the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception-selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty and constraints. This is the next problem addressed in this thesis. A probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are that (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model to manage perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks. The thesis then goes a step further by proposing a novel framework for learning impedance-based behaviours from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and that it allows the robot to exhibit different behaviours by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing this system is estimated, allowing the robot to shape its compliance. This approach permits the learning paradigm to be extended to fields other than common trajectory following. The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results show the importance of using force perceptions as well as the usefulness and strengths of the methods.
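    The perception-selection step ("what to imitate?") can be sketched with an off-the-shelf mutual information estimator: candidate sensory channels are ranked by their mutual information with an action variable and only the top-ranked ones are kept. The channel names and the number of retained inputs below are illustrative assumptions, not the setup used in the dissertation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_perceptions(X, y, names, keep=3):
    """Rank candidate sensory channels by their mutual information with a robot
    action variable and keep the most informative ones (keep=3 is arbitrary)."""
    mi = mutual_info_regression(X, y, random_state=0)
    order = np.argsort(mi)[::-1]
    return [(names[i], float(mi[i])) for i in order[:keep]]

# Synthetic demonstration data: the action depends on force_z and torque_x only
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # [force_z, torque_x, force_x, vision_u]
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=500)
print(select_perceptions(X, y, ["force_z", "torque_x", "force_x", "vision_u"]))
```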

    A perturbation signal based data-driven Gaussian process regression model for in-process part quality prediction in robotic countersinking operations

    A typical manufacturing process consists of a machining (material-removal) process followed by an inspection system for quality checks. Usually these checks are performed at the end of the process and may also involve removing the part to a dedicated inspection area. This paper presents an innovative perturbation-signal-based data generation and machine learning approach to build a robust process model with uncertainty quantification. The model maps the in-process signal features collected during machining to the post-process quality results obtained upon inspection of the finished product. In particular, a probabilistic framework based on Gaussian Process Regression (GPR) is applied to build a process model that accurately and reliably predicts key process quality indicators. Raw data provided by multiple sensors, including accelerometers, power transducers and acoustic emission sensors, is first collected and then processed to extract a large number of signal features from both the time and frequency domains. A strategy for selecting the most relevant features is also explored in order to reduce the input-space dimension and achieve faster training times. The proposed GPR model was tested on a multi-robot countersinking application for monitoring the machined countersink depths in composite aircraft components. Experimental results showed that the model can be used as a tool to predict part quality from in-process sensory information, which in turn helps to reduce the total inspection time by identifying the parts that require further investigation.
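    A minimal sketch of the GPR step, assuming scikit-learn's GaussianProcessRegressor in place of whatever implementation the paper uses: synthetic stand-ins for the extracted signal features are mapped to countersink depth, and the predictive standard deviation provides the uncertainty quantification. All data and kernel settings below are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-ins for extracted signal features (e.g. spindle-power RMS,
# vibration band energy) and measured countersink depths.
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(60, 2))
y_train = 4.0 + 1.5 * X_train[:, 0] - 0.8 * X_train[:, 1] + 0.05 * rng.normal(size=60)

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict the depth of a new countersink, with uncertainty, from its in-process features
X_new = np.array([[0.4, 0.7]])
mean, std = gpr.predict(X_new, return_std=True)
print(f"predicted depth: {mean[0]:.3f} mm  +/- {2 * std[0]:.3f} mm (95% band)")
```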

    A multi-step learning approach for in-process monitoring of depth-of-cuts in robotic countersinking operations

    Robotic machining is a relatively new and promising technology that aims to substitute conventional Computer Numerical Control (CNC) machine tools. Because of the low positional accuracy and variable stiffness of industrial robots, machining operations performed by robotic systems are subject to variations in the quality of the finished product. The main focus of this work is to provide a means of improving the performance of a robotic machining process through in-process monitoring of the key process variables that directly influence the quality of the machined part. To this end, an intelligent monitoring system is designed that uses sensor signals collected during machining to predict the errors that the robotic system introduces into the manufacturing process in terms of imperfections of the finished product. A multi-step learning procedure is proposed that allows process models to be trained during normal operation of the process. Moreover, by applying an iterative probabilistic approach, these models are able to estimate, given the current training dataset, whether a prediction is likely to be correct, and further training data is requested if necessary. The proposed monitoring system was tested in a robotic countersinking experiment for the in-process prediction of the countersink depth-of-cut, and the results showed a good ability of the models to provide accurate and reliable predictions.
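    The request-more-data behaviour described above can be sketched as a simple confidence gate on a probabilistic model: if the predictive uncertainty for a new feature vector exceeds a tolerance, the prediction is withheld and the sample is flagged for inspection and later training. The model choice (a scikit-learn GP), the tolerance and the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def predict_or_request(model, features, tolerance=0.05):
    """Return the depth-of-cut prediction if the model is confident enough;
    otherwise flag the sample so further training data can be requested.
    The tolerance (in mm) is an assumed acceptance band, not a paper value."""
    mean, std = model.predict(features.reshape(1, -1), return_std=True)
    if std[0] > tolerance:
        return None, "uncertain: request inspection / add to training set"
    return float(mean[0]), "accepted"

# Toy model deliberately trained on a narrow region of the feature space
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 0.5, size=(40, 1))
y = 3.0 + 2.0 * X[:, 0]
model = GaussianProcessRegressor(kernel=RBF(length_scale=0.1),
                                 optimizer=None, alpha=1e-6).fit(X, y)

print(predict_or_request(model, np.array([0.25])))   # inside the training region
print(predict_or_request(model, np.array([0.95])))   # far from the data -> uncertain
```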