
    Method for large-scale structured-light system calibration

    We propose a multi-stage calibration method for increasing the overall accuracy of a large-scale structured light system by leveraging the conventional stereo calibration approach using a pinhole model. We first calibrate the intrinsic parameters at a near distance and then the extrinsic parameters with a low-cost large calibration target at the designed measurement distance. Finally, we estimate pixel-wise errors from standard stereo 3D reconstructions and determine the pixel-wise phase-to-coordinate relationships using low-order polynomials. The calibrated pixel-wise polynomial functions can then be used for 3D reconstruction from a given pixel phase value. We experimentally demonstrated that our proposed method achieves high accuracy over a large volume: sub-millimeter within 1200 (H) × 800 (V) × 1000 (D) mm³.
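
    The pixel-wise phase-to-coordinate step lends itself to a short illustration. The sketch below (Python; the function names and the polynomial order are assumptions, not taken from the paper) fits a low-order polynomial per pixel from calibration samples of phase and stereo-reconstructed 3D points, and then evaluates it for a new phase value.

        # Minimal sketch of the pixel-wise phase-to-coordinate idea: assumes per-pixel
        # calibration samples (phase, x, y, z) gathered from standard stereo 3D
        # reconstructions at several target positions. Illustrative only.
        import numpy as np

        def fit_pixel_polynomials(phases, coords, order=3):
            """Fit low-order polynomials mapping phase -> (x, y, z) for one pixel.

            phases : (N,) phase values observed at this pixel
            coords : (N, 3) corresponding 3D points from stereo reconstruction
            """
            cx = np.polyfit(phases, coords[:, 0], order)
            cy = np.polyfit(phases, coords[:, 1], order)
            cz = np.polyfit(phases, coords[:, 2], order)
            return cx, cy, cz

        def reconstruct_pixel(phase, cx, cy, cz):
            """Evaluate the calibrated polynomials to get a 3D point for a new phase value."""
            return np.array([np.polyval(cx, phase),
                             np.polyval(cy, phase),
                             np.polyval(cz, phase)])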

    Computational structured illumination for high-content fluorescent and phase microscopy

    High-content biological microscopy targets high-resolution imaging across large fields-of-view (FOVs). Recent works have demonstrated that computational imaging can provide efficient solutions for high-content microscopy. Here, we use speckle structured illumination microscopy (SIM) as a robust and cost-effective solution for high-content fluorescence microscopy with simultaneous high-content quantitative phase (QP). This multi-modal compatibility is essential for studies requiring cross-correlative biological analysis. Our method uses laterally translated Scotch tape to generate high-resolution speckle illumination patterns across a large FOV. Custom optimization algorithms then jointly reconstruct the sample's super-resolution fluorescent (incoherent) and QP (coherent) distributions, while digitally correcting for system imperfections such as unknown speckle illumination patterns, system aberrations and pattern translations. Beyond previous linear SIM works, we achieve resolution gains of 4× over the objective's diffraction-limited native resolution, resulting in 700 nm fluorescence and 1.2 µm QP resolution across a FOV of 2 × 2.7 mm², giving a space-bandwidth product (SBP) of 60 megapixels.
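
    As a rough illustration of the reconstruction problem described above, the toy sketch below (Python; all names are assumptions) implements the incoherent speckle-SIM forward model, in which each raw frame is the fluorescent sample multiplied by a speckle pattern and blurred by the detection PSF, together with one gradient-descent update of the sample estimate. The paper's joint solver additionally recovers the unknown patterns, aberrations and translations, which is omitted here.

        # Toy sketch of the incoherent (fluorescence) speckle-SIM forward model.
        # The speckle patterns are assumed known to keep the illustration short.
        import numpy as np

        def forward(sample, pattern, otf):
            """One raw frame: blur(sample * pattern), with the blur applied in Fourier space."""
            return np.real(np.fft.ifft2(np.fft.fft2(sample * pattern) * otf))

        def gradient_step(sample, frames, patterns, otf, step=1e-2):
            """One gradient-descent update of the fluorescence estimate, patterns known."""
            grad = np.zeros_like(sample)
            for y, p in zip(frames, patterns):
                resid = forward(sample, p, otf) - y
                # adjoint of the forward model: blur with the conjugate OTF,
                # then multiply by the illumination pattern
                grad += p * np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(otf)))
            return sample - step * grad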

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, high-precision object recognition and localization, ...). In this paper, we propose a human-friendly, reliable and accurate calibration framework that enables easy estimation of both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model. This model unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages compared to other state-of-the-art systems: it is general (i.e., well suited for different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our claims.
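
    For intuition only, the following sketch (Python; the parameterization and names are assumptions, not the paper's actual model) shows how a two-component depth correction could be applied: a local, per-pixel undistortion map combined with a global, depth-dependent systematic correction.

        # Illustrative two-component depth error correction in the spirit of the
        # framework above: a per-pixel map plus a global depth-dependent term.
        # The actual parameterization in the paper differs.
        import numpy as np

        def correct_depth(raw_depth, local_map, global_coeffs):
            """Apply local (per-pixel) and global (depth-dependent) corrections.

            raw_depth     : (H, W) measured depth image in meters
            local_map     : (H, W) multiplicative per-pixel correction factors
            global_coeffs : (a, b) of a linear systematic model d_corr = a * d + b
            """
            a, b = global_coeffs
            locally_corrected = raw_depth * local_map
            return a * locally_corrected + b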

    MScMS-II: an innovative IR-based indoor coordinate measuring system for large-scale metrology applications

    In line with the great current interest in large-scale metrology applications across many fields of the manufacturing industry, technologies and techniques for dimensional measurement have recently improved substantially. Ease of use, logistic and economic issues, as well as metrological performance, are playing an increasingly important role among system requirements. This paper describes the architecture and the working principles of a novel infrared (IR) optical-based system, designed to perform low-cost and easy indoor coordinate measurements of large-size objects. The system consists of a distributed, network-based layout, whose modularity allows it to fit working volumes of different sizes and shapes by increasing the number of sensing units accordingly. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data-processing capabilities in order to share the overall computational load. The overall system functionalities, including distributed layout configuration, network self-calibration, 3D point localization, and measurement data processing, are discussed. A preliminary metrological characterization of system performance, based on experimental testing, is also presented.
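
    As an illustration of the 3D point localization step, the sketch below (Python; it assumes each sensing unit reports its position and a unit viewing direction toward the target, and it ignores the network self-calibration that produces these quantities) computes the least-squares intersection of the viewing rays.

        # Least-squares point closest to a set of 3D rays, each given as an origin
        # and a direction. Assumes at least two non-parallel rays. Illustrative only.
        import numpy as np

        def triangulate(origins, directions):
            """Return the 3D point minimizing the sum of squared distances to the rays."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for o, d in zip(origins, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)  # projector onto the plane orthogonal to the ray
                A += P
                b += P @ o
            return np.linalg.solve(A, b)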

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Microsoft has recently released the new Kinect One, providing the next generation of real-time range-sensing devices based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and so that they can also be adapted to any other range-sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists who are interested in using Kinect range-sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device.
    Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
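
    For readers who want to reproduce this kind of evaluation, the sketch below (Python) shows one generic range-camera test in the spirit of such a framework: fitting a plane to the depth samples of a flat target and reporting the residual noise. It is an illustrative experiment, not necessarily one of the paper's seven setups.

        # Generic flat-target test for a range camera: fit a plane to the measured
        # 3D points and inspect the residuals as a noise estimate. Illustrative only.
        import numpy as np

        def plane_fit_residuals(points):
            """Fit a plane to Nx3 points (least squares via SVD); return signed residuals."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]                      # direction of smallest variance = plane normal
            return (points - centroid) @ normal  # signed point-to-plane distances

        # Usage: rms_noise = np.sqrt(np.mean(plane_fit_residuals(depth_points) ** 2))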