
    Machine-human Cooperative Control of Welding Process

    An innovative auxiliary control system is developed to cooperate with an unskilled welder in manual GTAW in order to obtain consistent welding performance. In the proposed system, a novel mobile sensing system non-intrusively monitors manual GTAW by measuring the three-dimensional (3D) weld pool surface. Specifically, a miniature structured-light laser mounted on the torch projects a dot-matrix pattern onto the weld pool surface during the process; reflected by the weld pool surface, the laser pattern is intercepted by and imaged on the helmet glass, where it is recorded by a compact camera. The deformed reflection pattern contains the geometry of the weld pool and is thus used to reconstruct its 3D surface; an innovative image processing algorithm and a reconstruction scheme have been developed for this purpose. The real-time spatial relation of the torch and the helmet is formulated during welding: two miniature wireless inertial measurement units (WIMUs) are mounted on the torch and the helmet, respectively, to detect their rotation rates and accelerations, and a quaternion-based unscented Kalman filter (UKF) estimates the helmet/torch orientations from the WIMU data. The distance between the torch and the helmet is measured using an additional low-power structured-light laser pattern. Furthermore, human welder behavior has been studied; for example, a welder's adjustments of the welding current were modeled as responses to characteristic parameters of the 3D weld pool surface. This response model is implemented as a controller in both automatic and manual gas tungsten arc welding to maintain consistent full penetration.
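    The abstract does not give the filter equations; below is a minimal Python sketch of the quaternion propagation that the prediction stage of a quaternion-based UKF would apply to gyro data from a WIMU. The function names and the sample stream are hypothetical.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate_orientation(q, gyro_rad_s, dt):
    """Propagate orientation quaternion q by one gyro sample.

    This is the process model a quaternion UKF would apply to each
    sigma point during its prediction step.
    """
    rate = np.linalg.norm(gyro_rad_s)
    angle = rate * dt
    if angle < 1e-12:
        return q
    axis = gyro_rad_s / rate
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)  # renormalize against drift

# Example: integrate a 100 Hz gyro stream from one WIMU (values hypothetical)
q = np.array([1.0, 0.0, 0.0, 0.0])    # identity orientation
gyro = np.array([0.01, -0.02, 0.05])  # rad/s
for _ in range(100):                  # one second of samples
    q = propagate_orientation(q, gyro, dt=0.01)
print(q)
```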

    A Tutorial on Learning Human Welder's Behavior: Sensing, Modeling, and Control

    A human welder's experience and skills are critical for producing quality welds in the manual GTAW process. Learning the human welder's behavior can help develop next-generation intelligent welding machines and train welders faster. In this tutorial paper, various aspects of mechanizing the welder's intelligence are surveyed, including sensing of the weld pool, modeling of the welder's adjustments, and the corresponding model-based control approach. Specifically, different weld pool sensing methods are reviewed and a novel 3D vision-based sensing system developed at the University of Kentucky is introduced. Characterization of the weld pool is performed and a human intelligence model is constructed, including an extensive survey on modeling human dynamics and neuro-fuzzy techniques. Closed-loop control experiment results are presented to illustrate the robustness of the model-based intelligent controller despite welding speed disturbances. A foundation is thus established to explore the mechanism and transfer of the human welder's intelligence into robotic welding systems. Finally, future research directions in this field are presented.
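    As an illustration of the neuro-fuzzy modeling the tutorial surveys, here is a minimal first-order Sugeno fuzzy inference of the kind ANFIS trains, mapping a single weld pool parameter to a current adjustment. All rule and membership parameters are hypothetical stand-ins for values that would be learned from recorded welder responses.

```python
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_current_adjustment(pool_width_mm):
    """Two-rule first-order Sugeno inference, e.g.:
      IF width is NARROW THEN dI =  0.8*width + 5.0
      IF width is WIDE   THEN dI = -1.2*width + 2.0
    (Rule and membership parameters are hypothetical, standing in for
    values ANFIS would learn from recorded welder responses.)
    """
    w_narrow = gaussmf(pool_width_mm, c=3.0, sigma=1.0)
    w_wide = gaussmf(pool_width_mm, c=6.0, sigma=1.0)
    f_narrow = 0.8 * pool_width_mm + 5.0
    f_wide = -1.2 * pool_width_mm + 2.0
    # Weighted-average defuzzification (layers 4-5 of an ANFIS network)
    return (w_narrow * f_narrow + w_wide * f_wide) / (w_narrow + w_wide)

print(sugeno_current_adjustment(4.5))  # current adjustment in A (illustrative)
```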

    Virtualized Welding Based Learning of Human Welder Behaviors for Intelligent Robotic Welding

    Combining the human welder (with intelligence and sensing versatility) and automated welding robots (with precision and consistency) can lead to next-generation intelligent welding systems. In this dissertation, intelligent welding robots are developed through process modeling/control methods and by learning human welder behavior. Weld penetration and the 3D weld pool surface are first accurately controlled for an automated Gas Tungsten Arc Welding (GTAW) machine, and a closed-form model predictive control (MPC) algorithm is derived for real-time welding applications. The skilled welder's response to the 3D weld pool surface, realized by adjusting the welding current, is then modeled using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and compared to that of a novice welder. Automated welding experiments confirm the effectiveness of the proposed human response model. A virtualized welding system is then developed that enables transferring human knowledge into a welding robot. The learning of human welder movement (i.e., welding speed) is first realized with Virtual Reality (VR) enhancement using iterative K-means based local ANFIS modeling. As a separate effort, the learning is performed without VR enhancement, utilizing a fuzzy classifier to rank the data and preserve only the high-ranking "correct" responses. The trained supervised ANFIS model is transferred to the welding robot and the performance of the controller is examined. A fuzzy weighting based data fusion approach is proposed to combine multiple machine and human intelligence models; the fused model can outperform both the individual machine-based control algorithm and the welder intelligence-based models (with and without VR enhancement). Finally, a data-driven approach is proposed to model human welder adjustments in 3D (including welding speed, arc length, and torch orientations). Teleoperated training experiments are conducted in which a human welder adjusts the torch movements in 3D based on observation of real-time weld pool image feedback. The data are rated off-line by the welder and a welder rating system is synthesized. An ANFIS model is then proposed to correlate the 3D weld pool characteristic parameters with the welder's torch movements. A foundation is thus established to rapidly extract human intelligence and transfer such intelligence into welding robots.
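    The fuzzy weighting based fusion is only named in the abstract; the following is a minimal sketch of one plausible reading, in which each model's command is blended according to a normalized confidence weight. The function, the three-model setup, and all numbers are assumptions for illustration, not the dissertation's actual scheme.

```python
import numpy as np

def fuse_commands(commands, confidences):
    """Fuse per-model welding-current commands with fuzzy weights.

    `commands` are outputs of the individual controllers (machine MPC,
    welder model with VR, welder model without VR); `confidences` are
    each model's degree of validity for the current pool state, e.g.
    membership values in [0, 1]. Both are hypothetical stand-ins.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                     # normalize fuzzy weights
    return float(np.dot(w, commands))   # weighted-average command

# Example: three models vote on the next current (A); weights favor the MPC
print(fuse_commands([62.0, 58.5, 60.0], [0.9, 0.4, 0.6]))
```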

    A New Insight on Phased Array Ultrasound Inspection in MIG/MAG Welding

    Weldment inspection is a critical process in the metal industry. It is conducted first visually, then manually, and finally using instrumental techniques such as ultrasound. We made one hundred metal inert/active gas (MIG/MAG) weldments on plates of naval steel S275JR+N, some with no defects and others with induced pores, slag intrusion, and cracks. With the objective of three-dimensional reconstruction of the welding defects, phased array ultrasound inspections were carried out; defect-free weldments were used to establish the noise level. The results can be summarized as follows. (i) The top view obtained from the phased array provided no conclusive information about the welding defects: the echo amplitudes were about 70 mV for pores and cracks, and greater than 150 mV for slag intrusion, all with great variability. (ii) The sectional data did not lie at the same depths and needed to be interpolated. (iii) The interpolated sectional views, or C-scans, allowed the computation of top views at any depth, as well as the three-dimensional reconstruction of the defects. (iv) The simplest tool, consisting of the frequency histogram and its statistical moments, was sufficient to classify the defects. The mean echo amplitudes were 33 mV for pores, 72.16 mV for slag intrusion, and 43.19 mV for cracks, with standard deviations of 8.84 mV, 24.64 mV, and 12.39 mV, respectively. These findings represent the first step in the automatic classification of welding defects. This research was funded by CEI.MAR Cadiz, 2020-PR003.
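    A minimal sketch of the moment-based classification, reduced to the first moment as a nearest-centroid rule against the reported mean amplitudes. The decision rule and the synthetic data are assumptions; only the centroid and spread values are taken from the abstract.

```python
import numpy as np

def classify_defect(echo_amplitudes_mv):
    """Classify a defect from the mean of its echo-amplitude histogram,
    using the abstract's reported means (33 mV pores, 72.16 mV slag
    intrusion, 43.19 mV cracks) as nearest-centroid references. The
    decision rule itself is a hypothetical reading of the approach.
    """
    mean = np.mean(echo_amplitudes_mv)
    centroids = {"pore": 33.0, "slag intrusion": 72.16, "crack": 43.19}
    return min(centroids, key=lambda k: abs(mean - centroids[k]))

# Example on synthetic echoes drawn around the pore statistics (33 +/- 8.84 mV)
rng = np.random.default_rng(0)
print(classify_defect(rng.normal(33.0, 8.84, size=200)))  # -> "pore"
```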

    Advanced Comparison of Phased Array and X-rays in the Inspection of Metallic Welding

    The most common nondestructive weld inspection technique is X-ray imaging and, since a few years ago, the ultrasound-based phased array. The two have previously been compared from their top views, with the result that the phased array is much more efficient at discovering flaws. Building on the authors' recent studies, a welding flaw can be three-dimensionally reconstructed from the sectorial phased array information. The same methodology is applied here to compare X-rays and phased array quantitatively on 15 metal inert/active gas (MIG/MAG) welding specimens covering pores, slag intrusion, and cracks. The results are summarized in the correlation profiles between the X-ray top view and the top views reconstructed from the phased array at successive depths in the weld. The maximum of this profile occurs at the depth where the flaw in the X-ray image best matches that in the phased array records, leading to an effective quantitative comparison of X-rays and phased array.
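    A minimal sketch of the depth-matching step, assuming a stack of top views reconstructed at successive depths: the best-matching depth is the one maximizing the normalized cross-correlation with the X-ray top view. Array names, shapes, and the synthetic example are assumptions.

```python
import numpy as np

def best_matching_depth(xray_top, pa_volume):
    """Find the depth whose reconstructed phased-array top view best
    correlates with the X-ray top view.

    `xray_top` is a 2D array; `pa_volume` is a (depth, H, W) stack of
    top views reconstructed from the interpolated C-scans.
    """
    x = (xray_top - xray_top.mean()) / xray_top.std()
    scores = []
    for layer in pa_volume:
        y = (layer - layer.mean()) / layer.std()
        scores.append(float((x * y).mean()))  # normalized cross-correlation
    return int(np.argmax(scores)), scores

# Example with synthetic data: 20 depth layers of 64x64 top views
rng = np.random.default_rng(1)
vol = rng.normal(size=(20, 64, 64))
noisy_xray = vol[7] + 0.1 * rng.normal(size=(64, 64))
depth, profile = best_matching_depth(noisy_xray, vol)
print(depth)  # -> 7: the layer the noisy "X-ray" was derived from
```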

    Information-rich surface metrology

    Information-rich metrology refers to the incorporation of any type of available information into the data acquisition and processing pipeline of a measurement process, in order to improve the efficiency and quality of the measurement. In this work, the information-rich metrology paradigm is explored as applied to the measurement and characterisation of surface topography. The advantages and challenges of introducing heterogeneous information sources into the surface characterisation pipeline are illustrated. Examples are provided of the incorporation of structured knowledge about a part's nominal geometry, the manufacturing processes with their signature topographic features and set-up parameters, and the measurement instruments with their performance characteristics and behaviour in relation to the specific properties of the surfaces being measured. A wide array of surface metrology applications, ranging from product inspection to surface classification, defect identification, and the investigation of advanced manufacturing processes, is used to illustrate the information-rich paradigm.

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of teleoperating using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, which provides minimal intrusiveness and accommodates the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains, where teleoperation is compromised by the keyhole effect resulting from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, the calibration accuracy must be within the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive, so we propose a structured-light scanning based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies prove the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear system of equations with a smoothing coefficient ranging from 0 to 1, and this predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen, the purpose being to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
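    The abstract states only that the 1-step-ahead prediction is linear with a smoothing coefficient in [0, 1]; the following is one hedged reading of that formulation, with all names and values hypothetical.

```python
def predict_next_command(u_history, alpha=0.6):
    """One-step-ahead prediction of the operator command, masking
    sensing/communication latency. With alpha = 1 the prediction holds
    the current command; with alpha = 0 it fully extrapolates the local
    trend. A minimal linear sketch; the exact equations are not given
    in the abstract.
    """
    u_prev, u_curr = u_history[-2], u_history[-1]
    # Blend the current command with a one-step linear extrapolation
    return alpha * u_curr + (1 - alpha) * (2 * u_curr - u_prev)

# Example: a commanded joint angle drifting upward; predict the next sample
history = [0.10, 0.12, 0.15]
print(predict_next_command(history))
```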

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet increasing requirements for individualized, efficient, and high-quality production, traditional manufacturing processes are evolving into smart manufacturing with support from information technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). The prerequisite for integrating these advanced information technologies is to digitalize manufacturing processes so that they can be analyzed, controlled, and made to interact with other digitalized components. The digital twin has been developed as a general framework for doing so by building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital twin research in two aspects: (1) increasing the information analysis and reasoning ability by integrating deep learning; (2) enhancing human users' ability to operate the physical welding process via the digital twin by integrating human-robot interaction (HRI). First, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed by integrating deep learning to provide strong feature extraction and analysis ability. In this system, the direct information, including weld pool images, arc images, welding current, and arc voltage, is collected by cameras and arc sensors. The indirect information determining the welding quality, i.e., the weld joint top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image processing method and a deep convolutional neural network (CNN), respectively. Based on that, the weld joint geometrical size is controlled to meet the quality requirement in various welding conditions. Meanwhile, the developed digital twin is visualized through a graphical user interface (GUI) that gives human users effective and intuitive perception of the physical welding process. Second, to enhance human operative ability over the physical welding process via the digital twin, HRI is integrated using virtual reality (VR) as the interface, which transmits information bidirectionally: it conveys human commands to the welding robots and visualizes the digital twin to the human users. Six welders, skilled and unskilled, tested this system by completing the same welding job, demonstrating different operating patterns and resulting welding qualities. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining the fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM), is developed and achieves 94.44% classification accuracy (a sketch of such a pipeline follows below). The robot can also work as an assistant, helping human welders complete welding tasks by recognizing and executing the intended welding operations; this is done with a human intention recognition algorithm based on a hidden Markov model (HMM), and the welding experiments show that the developed robot-assisted welding helps improve welding quality. To take further advantage of the robot's movement accuracy and stability, its role is upgraded from assistant to collaborator, completing a subtask independently, namely torch weaving and automatic seam tracking in weaving GTAW. The other subtask, moving the welding torch along the weld seam, is completed by the human user, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under a human-robot collaboration framework. The developed digital twin for welding manufacturing helps promote next-generation intelligent welding and, with small modifications, can readily be applied to other similar manufacturing processes including painting, spraying, and additive manufacturing.
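    A minimal scikit-learn sketch of an FFT-PCA-SVM pipeline of the kind the abstract names. The window length, component count, kernel choice, and the synthetic training data are all hypothetical and do not reproduce the reported 94.44% accuracy.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

def fft_features(X):
    """Magnitude spectrum of each fixed-length motion trace (e.g., torch
    speed over a weld pass); keep only the one-sided spectrum."""
    return np.abs(np.fft.rfft(X, axis=1))

# FFT -> PCA -> SVM, mirroring the abstract's FFT-PCA-SVM combination
clf = make_pipeline(
    FunctionTransformer(fft_features),
    StandardScaler(),
    PCA(n_components=10),
    SVC(kernel="rbf"),
)

# Example with synthetic traces: 0 = unskilled, 1 = skilled (fabricated
# data purely to show the pipeline running; not the dissertation's data)
rng = np.random.default_rng(2)
X = rng.normal(size=(36, 256))
y = rng.integers(0, 2, size=36)
clf.fit(X, y)
print(clf.predict(X[:3]))
```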

    Machine Learning for Camera-Based Monitoring of Laser Welding Processes

    The increasing use of automated laser welding processes places high demands on process monitoring. The goal is to ensure high joint quality and the earliest possible defect detection. By using machine learning methods, less expensive and, in the best case, already existing sensors can be used to monitor the entire process. This work presents methods that perform process monitoring before, during, and after the welding process with a camera integrated into the focusing optics coaxially to the laser beam. The methods are illustrated using the contacting process of copper wires in the production of form-wound coil windings. The pre-process monitoring comprises component position detection optimized by a convolutional neural network; a shape check of the detected joining components additionally allows preprocessing steps to be monitored and the welding of defective components to be avoided. The in-process monitoring concentrates on the detection of spatter, as spatter serves as an indicator of an unstable process: machine learning algorithms perform a semantic segmentation that enables a clear distinction between smoke, process light, and material ejection. The post-process quality assessment involves extracting information about the size and shape of the connection area from the camera image. In addition, a method is proposed that computes height data from a camera image using machine learning methods; based on the resulting height map, a rule-based quality assessment of the weld seams is performed. All algorithms are designed with their integrability into industrial processes in mind, including, among other constraints, a small data basis, the limited inference hardware available in industrial production, and user acceptance.
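    A minimal sketch of a rule-based seam check driven by a height map, in the spirit of the post-process assessment described above. The height map would come from the learned single-image height estimation; thresholds, array shapes, and the synthetic bead are hypothetical.

```python
import numpy as np

def assess_seam(height_map_mm, min_height=0.2, max_void_ratio=0.05):
    """Rule-based pass/fail check on a weld-seam height map.

    Columns whose peak height falls below `min_height` are treated as
    missing bead; the seam fails if too many columns are missing.
    Threshold values are hypothetical.
    """
    seam_profile = height_map_mm.max(axis=0)  # peak height per column
    too_low = seam_profile < min_height       # columns with missing bead
    void_ratio = float(too_low.mean())
    return {
        "mean_height_mm": float(seam_profile.mean()),
        "void_ratio": void_ratio,
        "pass": void_ratio <= max_void_ratio,
    }

# Example on a synthetic 50x200 height map with a 0.5 mm bead and one gap
h = np.full((50, 200), 0.5)
h[:, 90:95] = 0.0  # simulated gap in the seam
print(assess_seam(h))
```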