
    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture individual effects of the Kinect devices as independently as possible, and in a way that they can also be adopted to evaluate any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device.
    Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
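
    As a quick illustration of the two range-sensing principles the paper compares, the sketch below contrasts phase-based ToF depth recovery with structured-light triangulation. It is a minimal sketch; the modulation frequency, focal length and baseline are assumed illustrative values, not parameters taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz=30e6):
    """Depth from a continuous-wave ToF measurement (Kinect One style).

    The sensor measures the phase shift of modulated light; depth is
    d = c * phi / (4 * pi * f_mod), with an unambiguous range of
    c / (2 * f_mod). The 30 MHz modulation frequency is an assumption.
    """
    return C * phase_shift_rad / (4.0 * np.pi * mod_freq_hz)

def structured_light_depth(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Depth from structured-light triangulation (Kinect v1 style).

    The projected pattern is matched against a reference image; depth
    follows the stereo relation Z = f * b / d. Focal length and
    baseline here are rough, assumed values.
    """
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    print(tof_depth(phase_shift_rad=np.pi / 2))     # ~1.25 m at 30 MHz
    print(structured_light_depth(disparity_px=20))  # ~2.18 m
```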

    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve benefits, mainly in terms of spatial frequency bandwidth, that a single sensor cannot provide. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.
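
    One of the simplest fusion results such a review covers is inverse-variance weighting, the maximum-likelihood combination of independent Gaussian measurements of the same quantity. The sketch below illustrates it for two hypothetical sensors; the values and uncertainties are made up for illustration, not data from the paper.

```python
import numpy as np

def fuse_measurements(values, variances):
    """Inverse-variance (maximum-likelihood) fusion of independent
    Gaussian measurements of the same quantity.

    Returns the fused estimate and its variance:
        x_f   = (sum_i x_i / s_i^2) / (sum_i 1 / s_i^2)
        s_f^2 = 1 / (sum_i 1 / s_i^2)
    """
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / weights.sum()
    fused_val = fused_var * (weights * values).sum()
    return fused_val, fused_var

# Hypothetical example: a coarse wide-bandwidth sensor and a precise
# narrow-bandwidth sensor measuring the same surface height (in um).
x, var = fuse_measurements([10.2, 10.8], [0.5**2, 0.1**2])
print(x, var**0.5)  # fused estimate sits close to the more precise sensor
```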

    Integrated sensors for robotic laser welding

    A welding head is under development with integrated sensory systems for robotic laser welding applications. Robotic laser welding requires sensory systems that are capable of accurately guiding the welding head over a seam in three-dimensional space and providing information about the welding process as well as the quality of the welding result. In this paper the focus is on seam tracking. It is difficult to measure the three-dimensional parameters of a seam during a robotic laser welding task, especially when sharp corners are present. The proposed sensory system is capable of providing the three-dimensional parameters of a seam in one measurement and of guiding robots over sharp corners.
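
    To make the seam-tracking task concrete, the sketch below shows one generic way to extract a seam position from a single triangulation height profile. This is not the paper's sensor algorithm; the profile shape, pixel pitch and parabolic sub-pixel refinement are illustrative assumptions.

```python
import numpy as np

def seam_position(profile, pixel_pitch_mm=0.05):
    """Locate a V-groove seam in one cross-sectional height profile.

    The seam is taken as the deepest point of the profile; a parabola
    fitted through the minimum and its two neighbours gives a
    sub-pixel estimate of its lateral position.
    """
    profile = np.asarray(profile, dtype=float)
    i = int(np.argmin(profile))
    if 0 < i < len(profile) - 1:
        y0, y1, y2 = profile[i - 1 : i + 2]
        denom = y0 - 2.0 * y1 + y2          # parabola vertex offset
        offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        offset = 0.0
    return (i + offset) * pixel_pitch_mm

# Hypothetical profile: flat sheet with a V-groove centred near pixel 42.
x = np.arange(100)
profile = np.minimum(0.0, np.abs(x - 42.3) * 0.1 - 1.0)
print(seam_position(profile))  # ~2.1 mm from the left edge of the scan
```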

    Sensor integration for robotic laser welding processes

    The use of robotic laser welding is increasing in industrial applications because of its ability to weld objects in three dimensions. Robotic laser welding involves three sub-processes: seam detection and tracking, welding process control, and weld seam inspection. Usually a separate sensory system is required for each sub-process. The use of separate sensory systems leads to heavy and bulky tools, whereas compact and light sensory systems are needed to achieve sufficient accuracy and accessibility. In the solution presented in this paper, all three sub-processes are integrated in one compact multi-purpose welding head. This multi-purpose tool is under development and consists of a laser welding head with integrated sensors for seam detection and inspection, while also carrying interfaces for process control. It can provide the relative position of the tool and the work piece in three-dimensional space. Additionally, it can cope with the occurrence of sharp corners along a three-dimensional weld path, which are difficult to detect and weld with conventional equipment due to measurement errors and robot dynamics. In this paper the process of seam detection is mainly elaborated.
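
    Providing the relative position of tool and work piece amounts to chaining coordinate transforms from the sensor frame to the robot base frame. The sketch below illustrates that step with homogeneous transforms; the kinematic chain and all numeric offsets are assumed for illustration, not taken from the paper.

```python
import numpy as np

def rot_z(angle_rad):
    """Homogeneous transform for a rotation about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translation(x, y, z):
    """Homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Assumed kinematic chain (illustrative values): robot base -> tool
# flange, and tool flange -> sensor optical frame.
T_base_tool = translation(0.6, 0.1, 0.4) @ rot_z(np.pi / 4)
T_tool_sensor = translation(0.0, 0.05, 0.12)

def seam_point_in_base(p_sensor):
    """Map a seam point measured in the sensor frame into the robot
    base frame, so the robot can be guided along the detected seam."""
    p = np.append(np.asarray(p_sensor, dtype=float), 1.0)  # homogeneous
    return (T_base_tool @ T_tool_sensor @ p)[:3]

print(seam_point_in_base([0.0, 0.0, 0.25]))  # seam point 0.25 m ahead of sensor
```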

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size and internode distance.
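
    The point cloud registration step can be illustrated with the standard point-to-point ICP loop that systems like this typically build on. The sketch below is a generic SVD-based variant run on synthetic data, not the paper's registration algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution for paired points)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: alternate nearest-neighbour matching
    with the closed-form rigid alignment until the clouds agree."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Hypothetical demo: register a slightly rotated, shifted copy of a cloud.
rng = np.random.default_rng(0)
dst = rng.random((200, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + 0.01
print(np.abs(icp(src, dst) - dst).max())  # small residual after alignment
```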

    A Feasibility Study on the Use of a Structured Light Depth-Camera for Three-Dimensional Body Measurements of Dairy Cows in Free-Stall Barns

    Frequent checks on livestock's body growth can help reduce problems related to cow infertility and other welfare implications, and help recognize health anomalies. In the last ten years, optical methods have been proposed to extract information on various parameters while avoiding direct contact with the animals' bodies, which generally causes stress. This research aims to evaluate a new monitoring system, which is suitable for frequently checking calves' and cows' growth through a three-dimensional analysis of portions of their bodies. The innovative system is based on multiple acquisitions from a low-cost structured-light depth camera (Microsoft Kinect™ v1). The metrological performance of the instrument is proved through an uncertainty analysis and a proper calibration procedure. The paper reports application of the depth camera for extraction of different body parameters. An expanded uncertainty ranging between 3 and 15 mm is reported in the case of ten repeated measurements. Coefficients of determination R² > 0.84 and deviations lower than 6% from manual measurements were in general detected in the case of head size, hips distance, withers-to-tail length, chest girth, and hips and withers height. Conversely, lower performances were recognized in the case of animal depth (R² = 0.74) and back slope (R² = 0.12).
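
    The abstract does not spell out the uncertainty procedure, but a standard way to obtain an expanded uncertainty from ten repeated measurements is the GUM Type A evaluation sketched below; the readings are hypothetical.

```python
import numpy as np

def expanded_uncertainty(samples, k=2.0):
    """GUM-style Type A evaluation for repeated measurements:
    standard uncertainty of the mean u = s / sqrt(n), expanded
    uncertainty U = k * u (k = 2 for ~95% coverage)."""
    samples = np.asarray(samples, dtype=float)
    u = samples.std(ddof=1) / np.sqrt(samples.size)
    return k * u

# Hypothetical ten repeated depth readings of one body landmark (mm).
readings = [412, 418, 409, 415, 421, 411, 414, 417, 410, 416]
print(expanded_uncertainty(readings))  # expanded uncertainty in mm
```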

    Computational structured illumination for high-content fluorescent and phase microscopy

    High-content biological microscopy targets high-resolution imaging across large fields-of-view (FOVs). Recent works have demonstrated that computational imaging can provide efficient solutions for high-content microscopy. Here, we use speckle structured illumination microscopy (SIM) as a robust and cost-effective solution for high-content fluorescence microscopy with simultaneous high-content quantitative phase (QP). This multi-modal compatibility is essential for studies requiring cross-correlative biological analysis. Our method uses laterally-translated Scotch tape to generate high-resolution speckle illumination patterns across a large FOV. Custom optimization algorithms then jointly reconstruct the sample's super-resolution fluorescent (incoherent) and QP (coherent) distributions, while digitally correcting for system imperfections such as unknown speckle illumination patterns, system aberrations and pattern translations. Beyond previous linear SIM works, we achieve resolution gains of 4× the objective's diffraction-limited native resolution, resulting in 700 nm fluorescence and 1.2 µm QP resolution, across a FOV of 2 × 2.7 mm², giving a space-bandwidth product (SBP) of 60 megapixels.
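
    As a rough sanity check of the quoted numbers, the snippet below estimates the space-bandwidth product from the FOV and resolution under a common Nyquist-pixel convention. The paper's 60-megapixel figure depends on its own sampling convention, so treat this as an order-of-magnitude check only, not a reproduction of its computation.

```python
# SBP ~= FOV_area / (resolution / 2)^2, counting Nyquist-sampled pixels.
fov_mm2 = 2.0 * 2.7              # field of view, mm^2
resolution_um = 0.7              # fluorescence resolution, um
pixel_um = resolution_um / 2.0   # Nyquist sampling
sbp = fov_mm2 * 1e6 / pixel_um**2
print(f"SBP ~ {sbp / 1e6:.0f} megapixels")  # ~44 MP, same order as quoted
```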