
    Machine-human Cooperative Control of Welding Process

    An innovative auxiliary control system is developed to cooperate with an unskilled welder in manual GTAW in order to obtain consistent welding performance. In the proposed system, a novel mobile sensing system non-intrusively monitors manual GTAW by measuring the three-dimensional (3D) weld pool surface. Specifically, a miniature structured-light laser mounted on the torch projects a dot-matrix pattern onto the weld pool surface during the process. Reflected by the weld pool surface, the laser pattern is intercepted by and imaged on the helmet glass and recorded by a compact camera mounted on it. The deformed reflection pattern contains the geometry of the weld pool and is therefore used to reconstruct its 3D surface; an innovative image-processing algorithm and a reconstruction scheme have been developed for this purpose. The real-time spatial relation between the torch and the helmet is formulated during welding: two miniature wireless inertial measurement units (WIMUs) are mounted on the torch and the helmet, respectively, to measure their rotation rates and accelerations, and a quaternion-based unscented Kalman filter (UKF) estimates the helmet and torch orientations from the WIMU data. The distance between the torch and the helmet is measured using an additional low-power structured-light laser pattern. Furthermore, human welder behavior has been studied; for example, a welder's adjustments of the welding current were modeled as responses to characteristic parameters of the 3D weld pool surface. This response model is implemented as a controller in both automatic and manual gas tungsten arc welding to maintain consistent full penetration.
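    The abstract names a quaternion-based UKF for orientation estimation but gives no equations, so below is a minimal Python sketch of the quaternion propagation step that the prediction stage of such a filter relies on: integrating gyroscope rates from a WIMU into an orientation quaternion. The function names and the [w, x, y, z] convention are assumptions for illustration, not the dissertation's code.

        import numpy as np

        def quat_multiply(q, r):
            """Hamilton product of two quaternions in [w, x, y, z] order."""
            w0, x0, y0, z0 = q
            w1, x1, y1, z1 = r
            return np.array([
                w0*w1 - x0*x1 - y0*y1 - z0*z1,
                w0*x1 + x0*w1 + y0*z1 - z0*y1,
                w0*y1 - x0*z1 + y0*w1 + z0*x1,
                w0*z1 + x0*y1 - y0*x1 + z0*w1,
            ])

        def propagate(q, omega, dt):
            """Rotate orientation q by body rates omega (rad/s) over dt seconds,
            as a UKF prediction step would do for each sigma point."""
            rate = np.linalg.norm(omega)
            if rate * dt < 1e-12:
                return q
            axis = omega / rate
            half_angle = 0.5 * rate * dt
            dq = np.concatenate(([np.cos(half_angle)], np.sin(half_angle) * axis))
            q_new = quat_multiply(q, dq)
            return q_new / np.linalg.norm(q_new)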

    Robotic weld groove scanning for large tubular T-joints using a line laser sensor

    This paper presents a novel procedure for robotic scanning of weld grooves in large tubular T-joints. The procedure records discrete weld groove scans using a commercially available line laser scanner attached to the robot end-effector. The advantage of the proposed algorithm is that it requires no prior knowledge of the joint interface geometry; only two initial scanning positions have to be specified. The position and orientation of the following scan are calculated from the data of the two previous weld groove scans, so once initiated, the scanning process is fully autonomous. The procedure is a two-step algorithm consisting of prediction and correction substeps, in which the position and orientation of the sensor for the following scan are first predicted and then corrected. Such a procedure does not require frequent weld groove scanning for navigation along the groove. The performance of the proposed procedure is studied experimentally using an industrial-size T-joint specimen. Several sets of scanning motion parameters have been tested, and the results are discussed.
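    The abstract describes the predictor-corrector idea only verbally; the sketch below shows one plausible reading, extrapolating the next sensor position from the two most recent groove-center measurements and then shifting it once the new scan reveals where the groove actually is. The function names, and the assumption that each scan yields a groove-center point, are mine rather than the paper's.

        import numpy as np

        def predict_next_pose(c_prev, c_curr, step):
            """Predict the next scan position by extrapolating the local groove
            direction from the two most recent groove-center points."""
            direction = c_curr - c_prev
            direction = direction / np.linalg.norm(direction)
            return c_curr + step * direction

        def correct_pose(predicted, measured_center, expected_center):
            """After scanning at the predicted pose, shift the sensor by the
            offset between the expected and the actually observed groove center."""
            return predicted + (measured_center - expected_center)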

    Development of a real-time ultrasonic sensing system for automated and robotic welding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The implementation of robotic technology in welding processes is made difficult by the inherent process variables of part location, fit-up, orientation and repeatability. Given these aspects, advanced adaptive control techniques are essential to ensure weld reproducibility, consistency and quality. These involve not only the development of adequate sensors for seam tracking and joint recognition but also the development of overall machines with a level of artificial intelligence sufficient for automated welding. The development of such a prototype system, which utilizes a manipulator arm, ultrasonic sensors and a transistorised welding power source, is outlined. This system incorporates three essential aspects. It locates and tracks the welding seam, ensuring correct positioning of the welding head relative to the joint preparation. Additionally, it monitors the joint profile and the molten weld pool and modifies the relevant heat-input parameters, ensuring consistent penetration, joint filling and an acceptable weld bead shape. Finally, it uses both of the above sources of information to reconstruct three-dimensional images of the weld pool silhouettes, providing in-process inspection of the welded joints. Welding process control strategies based on quantitative relationships between input parameters and weld bead shape have been incorporated into the system, allowing real-time decisions to be made during welding without operator intervention. British Technology Group (BTG).
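    The thesis abstract mentions control strategies built on quantitative relationships between input parameters and bead shape but states none of them. As a hedged illustration only, the sketch below shows the simplest such real-time rule: a clamped proportional correction of welding current from a measured penetration error. The gain, limits and variable names are illustrative assumptions, not values from the thesis.

        def adjust_current(current_a, measured_pen_mm, target_pen_mm,
                           gain_a_per_mm=8.0, min_a=80.0, max_a=220.0):
            """Proportional heat-input correction: raise the current when
            penetration falls short of target, lower it when penetration is
            excessive, and clamp the command to the power source's limits."""
            error = target_pen_mm - measured_pen_mm
            corrected = current_a + gain_a_per_mm * error
            return max(min_a, min(max_a, corrected))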

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. This research focuses on two sub-fields of co-robotics: teleoperation and telepresence.

    We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, which minimizes intrusiveness and accommodates the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance, where teleoperation is compromised by the keyhole effect resulting from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves a motion sensor, a projector, cameras and a robotic arm; given the purpose of the system, calibration accuracy must be within the millimeter level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica with commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive, so we propose a structured-light 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies demonstrate the performance of the proposed algorithm.

    To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be derived by optimizing a cost function.

    We then explore telepresence. Many hardware designs place a camera optically directly behind the screen so that two-way video teleconferencing can maintain eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement and warping methods that use only a single image.
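    The abstract sketches the 1-step-ahead predictor only verbally: a linear formulation with a smoothing coefficient in [0, 1], tuned by optimizing a cost function. Below is a minimal Python reading of that idea; the blend of the latest pose with a linear extrapolation, and the grid search for the coefficient, are my interpretation rather than the dissertation's exact equations.

        import numpy as np

        def one_step_ahead(x_prev, x_curr, alpha):
            """Predict the next pose: alpha = 0 holds the latest pose,
            alpha = 1 fully trusts the linear extrapolation."""
            extrapolated = x_curr + (x_curr - x_prev)
            return (1.0 - alpha) * x_curr + alpha * extrapolated

        def fit_alpha(track, grid=np.linspace(0.0, 1.0, 101)):
            """Pick the smoothing coefficient by minimizing the squared
            one-step prediction error over a recorded 1-D trajectory
            (the 'cost function' the abstract refers to)."""
            def cost(a):
                preds = one_step_ahead(track[:-2], track[1:-1], a)
                return float(np.sum((preds - track[2:]) ** 2))
            return min(grid, key=cost)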

    WELD PENETRATION IDENTIFICATION BASED ON CONVOLUTIONAL NEURAL NETWORK

    Weld joint penetration determination is a key factor in welding process control. Not only does it directly affect the weld joint's mechanical properties, such as fatigue; it also demands considerable human intelligence, requiring either complex modeling or rich welding experience. Penetration status identification has therefore become an obstacle for intelligent welding systems. In this dissertation, an innovative method is proposed to detect the weld joint penetration status using machine-learning algorithms. A GTAW welding system is first built. A dot-structured laser pattern is projected onto the weld pool surface during welding, and the reflected laser pattern, which contains the information about the penetration status, is captured. An experienced welder is able to determine the weld penetration status from the reflected laser pattern alone; however, it is difficult to characterize the images so as to extract the key information that determines penetration status. To overcome the challenges of finding the right features and of accurately processing images with conventional machine vision algorithms, we propose using a convolutional neural network (CNN) to automatically extract key features and determine penetration status. Data-label pairs are needed to train a CNN, so an image-acquisition system is designed to collect the reflected laser pattern together with images of the back side of the workpiece. Data augmentation is performed to enlarge the training set, resulting in 270,000 training, 45,000 validation and 45,000 test samples. A six-layer CNN is designed and trained using a revised mini-batch gradient descent optimizer. The final test accuracy is 90.7%, and a voting mechanism over three consecutive images further improves the prediction accuracy.
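    The abstract specifies a six-layer CNN, three-frame voting and the dataset sizes, but no architecture details, so the PyTorch sketch below is a hypothetical layout: the input size (64x64 grayscale), channel counts and the four-conv-plus-two-linear split are my assumptions. Only the majority vote over three consecutive predictions follows the abstract directly.

        import torch
        import torch.nn as nn
        from collections import Counter

        class PenetrationCNN(nn.Module):
            """Hypothetical six-weight-layer CNN (four conv + two fully
            connected) classifying 64x64 reflected-laser images into
            penetration classes."""
            def __init__(self, num_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
                    nn.Linear(256, num_classes),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        def vote(class_ids):
            """Majority vote over the predicted classes of three consecutive
            frames, as the abstract's voting mechanism describes."""
            return Counter(class_ids).most_common(1)[0][0]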

    Passive Visual Sensing in Automatic Arc Welding


    END-TO-END PREDICTION OF WELD PENETRATION IN REAL TIME BASED ON DEEP LEARNING

    Welding is an important joining technique that has been automated and robotized. In automated and robotic welding applications, however, the parameters are preset and are not adaptively adjusted to overcome unpredicted disturbances, so these applications cannot meet the standards of the welding and manufacturing industries in terms of quality, efficiency, and individuality. Combining information sensing and processing with traditional welding techniques is a significant step toward revolutionizing the welding industry. In practical welding, the weld penetration, as measured by the back-side bead width, is a critical factor in determining the integrity of the weld produced. However, the back-side bead width is difficult to monitor directly during manufacturing because it occurs underneath the surface of the welded workpiece. Predicting the back-side bead width from conveniently sensed information is therefore a fundamental issue in intelligent welding. Traditional approaches are indirect: they define and extract key characteristic information from the sensed data and build a model to predict the target information from that characteristic information. Due to a lack of feature information, the cumulative error of the extracted information and the complex sensing process directly affect prediction accuracy and real-time performance. An end-to-end, data-driven prediction system is proposed to predict the weld penetration status from top-side images during welding. In this method, a passive-vision sensing system with two cameras simultaneously monitors the top-side and back-bead information. The weld joints are then classified into three classes (under penetration, desirable penetration, and excessive penetration) according to the back-bead width. Taking the weld pool-arc images as inputs and the corresponding penetration statuses as labels, an end-to-end convolutional neural network (CNN) is designed and trained so that the features are automatically defined and extracted. To increase accuracy and training speed, a transfer learning approach based on a residual neural network (ResNet) is developed: the ResNet-based model is pre-trained on the ImageNet dataset to acquire a better feature-extracting ability, and its fully connected layers are modified for our own dataset. Our experiments show that this transfer learning approach decreases training time and improves performance. Furthermore, this study fuses the present weld pool-arc image with two previous images acquired 1/6 s and 2/6 s earlier. The fused single image reflects the dynamic welding phenomena, and prediction accuracy is significantly improved by fusing this temporal information at the input layer of the CNN (early fusion). Given the critical role of weld penetration and the negligible impact on system implementation, this method represents major progress in weld-penetration monitoring and is expected to provide even greater improvements in pulsed-current welding, where the process becomes highly dynamic.
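    As a hedged sketch of the two ideas the abstract names, ResNet transfer learning and early fusion, the Python code below replaces a pre-trained ResNet's final layer for the three penetration classes and stacks the current frame with the frames 1/6 s and 2/6 s earlier as the three input channels. The specific ResNet depth (18) and the channel ordering are assumptions; the abstract does not state them.

        import numpy as np
        import torch.nn as nn
        from torchvision import models

        def build_model(num_classes=3):
            """ResNet pre-trained on ImageNet, with the fully connected head
            replaced for three-class penetration prediction (transfer learning)."""
            model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
            model.fc = nn.Linear(model.fc.in_features, num_classes)
            return model

        def early_fuse(frame_t, frame_t_minus_1, frame_t_minus_2):
            """Early fusion: stack the current grayscale frame with the frames
            acquired 1/6 s and 2/6 s earlier as one 3-channel input, which
            conveniently matches the pre-trained network's RGB input layout."""
            return np.stack([frame_t_minus_2, frame_t_minus_1, frame_t], axis=0)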

    Supporting robotic welding of aluminium with a laser line scanner-based trigger definition method

    Automation and the use of robots for welding operations is an important research topic. Being able to automate, and thus save time on, setting up and performing robotic welding of complex, large-scale structures made of reflective materials such as aluminium provides clear economic and competitive advantages. However, the difficulty of accurately detecting and calibrating the robot to a given physical workpiece, in addition to noise such as reflections, makes a feasible automation solution hard to develop and demonstrate. This paper proposes combining laser line scanning with CAD-based analysis of the workpiece geometry to identify the relevant elements of the workpiece in the physical world and thereby support welding operations. An extendable trigger definition method is proposed to identify features of interest in a workpiece. The method can support the execution of welding sequences, which in our case are represented as sequences of triggers that have to be observed and followed at robot runtime to weld the workpiece together.
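    The abstract leaves the trigger definition method abstract, so below is a minimal Python sketch of how a welding sequence could be represented as an ordered list of triggers, each pairing a named feature of interest with a predicate over incoming scan profiles. Every name and the predicate interface are hypothetical illustrations, not the paper's implementation.

        from dataclasses import dataclass
        from typing import Callable, Iterable, List
        import numpy as np

        @dataclass
        class Trigger:
            """One step of a welding sequence: a named feature of interest
            plus a condition detecting it in a laser line scan profile."""
            name: str
            condition: Callable[[np.ndarray], bool]

        def follow_sequence(triggers: List[Trigger], scans: Iterable[np.ndarray]):
            """Consume scan profiles at robot runtime, firing each trigger in
            order as its condition is first observed; yields fired trigger names."""
            pending = iter(triggers)
            active = next(pending, None)
            for profile in scans:
                if active is not None and active.condition(profile):
                    yield active.name
                    active = next(pending, None)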