
    Intelligent 3D seam tracking and adaptable weld process control for robotic TIG welding

    Tungsten Inert Gas (TIG) welding is extensively used in aerospace applications due to its unique ability to produce higher-quality welds than other shielded-arc welding processes. However, most TIG welding is still performed manually and has not reached the levels of automation achieved by other welding techniques. This is mostly attributed to a lack of process knowledge and of adaptability to complexities such as mismatches caused by part fit-up. Recent advances in automation have enabled industrial robots to perform complex tasks that require intelligent decision making, predominantly through sensors. Applications such as TIG welding of aerospace components demand tight tolerances and need intelligent decision-making capability to accommodate unexpected variation and to weld complex geometries. Such decision-making procedures must be based on feedback about the weld profile geometry. In this thesis, a real-time position-based closed-loop system was developed with a six-axis industrial robot (KUKA KR 16) and a laser-triangulation sensor (Micro-Epsilon scanCONTROL 2900-25). [Continues.]
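A position-based closed-loop seam correction of the kind described can be sketched as follows. This is a minimal illustration, not the thesis implementation: the profile format, the groove-centre heuristic, and the gain value are all assumptions.

```python
def seam_tracking_step(profile, nominal_y, robot_pose, gain=0.5):
    """One closed-loop correction step from a single laser-profile scan.

    profile    -- list of (y, z) points from the line-laser scanner (mm)
    nominal_y  -- expected lateral seam position in the sensor frame (mm)
    robot_pose -- [x, y, z] tool position in the robot base frame (mm)
    gain       -- proportional gain; values below 1 damp oscillation
    """
    # Take the seam centre as the deepest point of the groove profile
    # (a simplifying assumption; real systems fit the groove shape).
    seam_y, _ = min(profile, key=lambda p: p[1])
    error = seam_y - nominal_y              # lateral tracking error (mm)
    corrected = list(robot_pose)
    corrected[1] += gain * error            # proportional correction in y
    return corrected, error
```

In a real-time loop this step would run once per scanner frame, feeding the corrected pose to the robot controller.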

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human–machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. This research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need not wear any device, which minimizes intrusiveness and accommodates the eyes' natural focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance, where teleoperation is compromised by the keyhole effect arising from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, the calibration accuracy must be kept within the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica using commodity devices for better alignment of the video frames, since conventional 3D scanners either lack depth resolution or are very expensive.
    We propose a structured-light 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies confirm the performance of the proposed algorithm. To compensate for the loss of synchronization between the local and remote stations caused by latency in data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to place a camera optically directly behind the screen, so that two-way video teleconferencing can maintain eye contact. However, the image from such a see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
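The one-step-ahead predictor with a smoothing coefficient in [0, 1] can be sketched as a damped linear extrapolation of the operator's command history. The function and parameter names below are assumptions for illustration, not the abstract's exact formulation.

```python
def predict_next(history, alpha=0.6):
    """One-step-ahead prediction of a commanded position under latency.

    history -- past commanded positions along one axis, most recent last
    alpha   -- smoothing coefficient in [0, 1]: 0 holds the last sample,
               1 fully trusts the latest finite-difference velocity
    """
    if len(history) < 2:
        return history[-1]                 # not enough data to extrapolate
    velocity = history[-1] - history[-2]   # finite-difference velocity
    return history[-1] + alpha * velocity  # damped linear extrapolation
```

Tuning `alpha` trades responsiveness (high values) against smoothness (low values), which is why the cost-function formulation mentioned above is useful for choosing it.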

    Vision-assisted robotic finishing of friction stir-welded corner joints

    One process required in the fabrication of large components is welding, after which machining may be needed to achieve final dimensions and uniform surfaces. Friction stir welding (FSW) is a typical example, after which a series of deburring and grinding operations are carried out. Currently, the majority of these operations are performed either manually, by human workers, or on machine tools, which creates bottlenecks in the process flow. This paper presents a robotic finishing system that automates the finishing of friction stir-welded parts with minimal human involvement. In sequence, the system can scan and reconstruct the 3D model of the part, localise it in the robot frame, and generate a suitable machining path to remove the excess material from FSW without violating process constraints. Cutting trials carried out for demonstration show that the developed system can consistently machine the corner joints of an industrial-scale part to the desired surface quality of around 1.25 μm in Ra, the arithmetic average of the surface roughness.
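The reported surface quality is quantified by Ra, the arithmetic average of the absolute deviations of the profile from its mean line. A minimal sketch of that computation:

```python
def roughness_ra(heights):
    """Arithmetic-average roughness Ra of a sampled surface profile.

    heights -- measured surface heights (Ra is returned in the same units)
    Ra is the mean absolute deviation from the profile's mean line.
    """
    mean_line = sum(heights) / len(heights)
    return sum(abs(h - mean_line) for h in heights) / len(heights)
```

In practice the heights would come from a profilometer trace over the machined corner joint, sampled at a standardised evaluation length.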

    A sensor enabled robotic strategy for automated defect-free multi-pass high-integrity welding

    High-integrity welds found in safety-critical industries require flaw-free joints, but automation is challenging due to the low-volume, often unique nature of the work, alongside high uncertainty in part localisation. As such, robotic welding still requires tediously hand-taught paths or offline approaches based on nominal Computer-Aided Design (CAD) models. Optical and laser sensors are commonly deployed to provide online adjustment of pre-defined paths within controlled environments. This paper presents a sensor-driven approach to defect-free welding based on the as-built joint geometry, requiring neither accurate part localisation nor CAD knowledge. The approach (a) autonomously localises the specimen in the scene without a CAD model, (b) adapts and generates accurate welding paths unique to the as-built workpiece, and (c) generates robot kinematics based on an external-control strategy. The proposed approach is validated through experiments on unconstrained placed joints, where the increased accuracy of the generated welding paths, without conventional seam tracking, is confirmed with average errors of 0.12 mm and 0.4°. Coupled with a multi-pass welding framework, fully automated robotic arc welding is deployed for different configurations. Non-Destructive Testing (NDT) in the form of Ultrasonic Testing (UT) inspection validates the repeatable and flaw-free nature of the sensor-driven approach, yielding direct benefits in quality alongside reduced re-work.
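The reported path accuracy (average errors of 0.12 mm and 0.4°) can be evaluated as pointwise averages between a generated path and a reference path. A minimal sketch, assuming equal-length sampled paths where each sample carries a position and a single torch angle:

```python
import math

def path_errors(generated, reference):
    """Average position (mm) and orientation (deg) error between paths.

    generated, reference -- equal-length lists of (x, y, z, angle_deg)
    samples; position error is the Euclidean distance per sample pair.
    """
    n = len(generated)
    pos = sum(math.dist(g[:3], r[:3]) for g, r in zip(generated, reference)) / n
    ang = sum(abs(g[3] - r[3]) for g, r in zip(generated, reference)) / n
    return pos, ang
```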

    Robotic weld groove scanning for large tubular T-joints using a line laser sensor

    This paper presents a novel procedure for robotic scanning of weld grooves in large tubular T-joints. The procedure records discrete weld groove scans using a commercially available line-laser scanner attached to the robot end-effector. The advantage of the proposed algorithm is that it does not require any prior knowledge of the joint interface geometry; only two initial scanning positions have to be specified. The position and orientation of the following scan are calculated from the data of the two previous weld groove scans, so once initiated, the scanning process is fully autonomous. The procedure is a two-step algorithm consisting of prediction and correction substeps, in which the position and orientation of the sensor for the following scan are first predicted and then corrected. Such a procedure does not require frequent weld groove scanning for navigation along the groove. The performance of the proposed procedure is studied experimentally using an industrial-size T-joint specimen. Several cases of scanning motion parameters have been tested, and a discussion of the results is given.
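The prediction substep, extrapolating the next sensor pose from the two previous scans, can be sketched for the position component as follows. Restricting the sketch to position and using plain linear extrapolation are simplifying assumptions; the paper's procedure also predicts and corrects orientation.

```python
def predict_next_scan(pose_prev, pose_curr, step=1.0):
    """Prediction substep: extrapolate the next scanner position.

    pose_prev, pose_curr -- [x, y, z] sensor positions of the two most
                            recent groove scans (robot base frame)
    step -- fraction of the last inter-scan distance to advance
    """
    direction = [c - p for p, c in zip(pose_prev, pose_curr)]
    return [c + step * d for c, d in zip(pose_curr, direction)]
```

The correction substep would then adjust this predicted pose using the actual groove geometry found in the new scan, keeping the sensor centred over the groove as the joint curves.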

    On Sensor-Controlled Robotized One-off Manufacturing

    A semi-automatic, task-oriented system structure has been developed and tested in an arc welding application. In normal industrial robot programming, the path is created first and the process is based upon the decided path. Here, a process-oriented method is proposed instead. It is natural to focus on the process, since the path is in reality a result of process needs. Another benefit of choosing a process focus is that it naturally leads to task-oriented thinking, in which the task can be split into sub-tasks, one for each part of the process with similar process characteristics. By carefully choosing and encapsulating the information needed to execute a sub-task, this component can be re-used whenever that sub-task occurs. By using virtual sensors and generic interfaces to robots and sensors, applications built upon the system design do not change between simulation and actual shop-floor runs. The system allows a mix of real and simulated components during simulation and run-time.
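The virtual-sensor idea, a generic interface behind which real and simulated sensors are interchangeable, can be sketched as follows. All class and method names here are illustrative assumptions, not the system described above.

```python
from abc import ABC, abstractmethod

class SeamSensor(ABC):
    """Generic interface shared by real and virtual (simulated) sensors."""

    @abstractmethod
    def read_offset(self) -> float:
        """Return the lateral seam offset in millimetres."""

class VirtualSeamSensor(SeamSensor):
    """Virtual sensor: replays a scripted offset profile in simulation."""

    def __init__(self, scripted_offsets):
        self._offsets = iter(scripted_offsets)

    def read_offset(self) -> float:
        # Return 0.0 once the scripted profile is exhausted.
        return next(self._offsets, 0.0)

def control_step(sensor: SeamSensor, y, gain=0.5):
    """Application code depends only on the interface, so the same
    routine runs unchanged in simulation and on the shop floor."""
    return y + gain * sensor.read_offset()
```

Swapping `VirtualSeamSensor` for a driver class wrapping real hardware requires no change to `control_step`, which is what lets the same application run in both simulation and shop-floor modes.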

    Sensor based real-time control of robots
