2,449 research outputs found

    Integrated sensors for robotic laser welding

    A welding head is under development with integrated sensory systems for robotic laser welding applications. Robotic laser welding requires sensory systems that can accurately guide the welding head over a seam in three-dimensional space and provide information about the welding process as well as the quality of the welding result. In this paper the focus is on seam tracking. It is difficult to measure the three-dimensional parameters of a seam during a robotic laser welding task, especially when sharp corners are present. The proposed sensory system can provide the three-dimensional parameters of a seam in one measurement and guide robots over sharp corners.
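    As a rough illustration of how one such 3D seam measurement could be turned into a guidance command, the sketch below assumes a hypothetical sensor output containing the seam position and local tangent in the robot frame; the names and the look-ahead parameter are illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical seam measurement: position and local tangent of the seam in the
# robot base frame, as could be obtained from a single 3D sensor reading.
class SeamMeasurement:
    def __init__(self, position, direction):
        self.position = np.asarray(position, dtype=float)    # [x, y, z] in metres
        self.direction = np.asarray(direction, dtype=float)  # tangent along the seam

def next_tool_target(measurement, look_ahead=0.005):
    """Return a target point a small distance ahead of the measured seam point,
    along the seam tangent (a simple look-ahead guidance rule)."""
    tangent = measurement.direction / np.linalg.norm(measurement.direction)
    return measurement.position + look_ahead * tangent

# Usage: one measurement in, one tool target out.
m = SeamMeasurement(position=[0.40, 0.10, 0.25], direction=[1.0, 0.0, 0.1])
print("next tool target:", np.round(next_tool_target(m), 4))
```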

    Sensor integration for robotic laser welding processes

    The use of robotic laser welding is increasing in industrial applications because of its ability to weld objects in three dimensions. Robotic laser welding involves three sub-processes: seam detection and tracking, welding process control, and weld seam inspection. Usually a separate sensory system is required for each sub-process. The use of separate sensory systems leads to heavy and bulky tools, whereas compact and lightweight sensory systems are needed to reach sufficient accuracy and accessibility. In the solution presented in this paper, all three sub-processes are integrated in one compact multi-purpose welding head. This multi-purpose tool is under development and consists of a laser welding head with integrated sensors for seam detection and inspection, while also carrying interfaces for process control. It can provide the relative position of the tool and the workpiece in three-dimensional space. Additionally, it can cope with the occurrence of sharp corners along a three-dimensional weld path, which are difficult to detect and weld with conventional equipment due to measurement errors and robot dynamics. In this paper the process of seam detection is mainly elaborated.
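    The sketch below only illustrates how the three sub-processes named above could sit behind one tool interface; the class and function names are hypothetical stand-ins for the integration described in the abstract, not the authors' actual design.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical interfaces for the three sub-processes: seam detection and
# tracking, welding process control, and weld seam inspection.
@dataclass
class WeldingHead:
    detect_seam: Callable[[], Dict]           # one 3D seam measurement
    control_process: Callable[[Dict], None]   # adjust e.g. laser power, speed, focus
    inspect_seam: Callable[[], Dict]          # quality metrics of the finished weld

def weld_segment(head: WeldingHead, steps: int) -> List[Dict]:
    """One welding pass: track the seam and control the process at each step,
    then inspect the finished seam."""
    for _ in range(steps):
        seam = head.detect_seam()      # seam detection and tracking
        head.control_process(seam)     # in-process control based on the measurement
    return [head.inspect_seam()]       # post-weld inspection report

# Usage with stub implementations standing in for real sensors and actuators.
head = WeldingHead(
    detect_seam=lambda: {"position": (0.40, 0.10, 0.25), "direction": (1.0, 0.0, 0.0)},
    control_process=lambda seam: None,
    inspect_seam=lambda: {"defects": 0},
)
print(weld_segment(head, steps=3))
```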

    Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method

    Computer-assisted diagnosis systems can reduce workload and provide objective diagnoses to ophthalmologists. At the first level of automated screening systems, feature extraction is the fundamental step. One of these retinal features is the fovea. The fovea is a small fossa on the fundus, which appears as a deep-red or red-brown region in color retinal images. By observing retinal images, it appears that the main vessels diverge from the optic nerve head and follow a specific course that can be geometrically modeled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabolic curve. Therefore, based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model. With respect to this core vascular structure, the fovea can then be detected in the fundus images. For the vessel segmentation, our algorithm addresses the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. In order to extract blood vessels, sub-vessels are extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to be associated with peaks in the Radon space. The largest vessels, selected using a high threshold on the Radon transform, determine the main course or overall configuration of the blood vessels, which, when fitted to a parabola, leads to the localization of the fovea. In effect, with an accurate fit, the fovea normally lies along the line joining the vertex and the focus of the parabola. The darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and a public one (DRIVE). The results show that among the 20 images of the public database (DRIVE) we detected the fovea in 85% of them. For the MUMS-DB database, among 200 images we detected the fovea correctly in 83% of them.
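    A minimal sketch of this pipeline is given below, assuming scikit-image's Radon transform and a green-channel fundus image supplied as a NumPy array; the window size, thresholds, and sampling range are illustrative choices, not the values used in the paper, and the vessel-validation step is reduced to a simple peak test.

```python
import numpy as np
from skimage.transform import radon

def vessel_points_local_radon(green, window=64, step=48, k=3.0):
    """Scan overlapping windows; keep window centres whose local Radon
    transform shows a strong peak, i.e. a likely vessel segment."""
    points = []
    h, w = green.shape
    for y in range(0, h - window, step):
        for x in range(0, w - window, step):
            patch = green[y:y + window, x:x + window].astype(float)
            patch = patch.mean() - patch                 # invert: dark vessels become positive
            sino = radon(patch, theta=np.arange(0, 180, 10), circle=False)
            if sino.max() > sino.mean() + k * sino.std():  # high-threshold peak validation
                points.append((x + window // 2, y + window // 2))
    return np.array(points)

def locate_fovea(green, points):
    """Fit x = a*y^2 + b*y + c to the vessel centres (a parabola opening
    sideways, vertex near the optic nerve head) and pick the darkest pixel
    along the line from the vertex towards the focus."""
    a, b, c = np.polyfit(points[:, 1], points[:, 0], 2)
    y0 = -b / (2 * a)                                    # axis of symmetry
    x0 = np.polyval([a, b, c], y0)                       # vertex (optic nerve head estimate)
    focal = 1.0 / (4 * abs(a))                           # vertex-to-focus distance
    ts = np.linspace(0.5, 3.0, 200) * focal * np.sign(a)   # sample towards the open side
    xs = np.clip(x0 + ts, 0, green.shape[1] - 1).astype(int)
    y_line = int(np.clip(y0, 0, green.shape[0] - 1))
    darkest = np.argmin(green[y_line, xs])               # fovea is the darkest region
    return xs[darkest], y_line
```

    In this simplification the candidate search is restricted to the parabola's axis, which corresponds to the statement above that the fovea lies along the line joining the vertex and the focus.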