
    Cylindrical Lenses Based Spectral Domain Low-Coherence Interferometry for On-line Surface Inspection

    This paper presents a spectral domain low-coherence interferometry (SD-LCI) method that is effective for applications in on-line surface inspection because it can obtain a surface profile in a single shot. It has an advantage over existing spectral interferometry techniques because it uses cylindrical lenses as the objective lens in a Michelson interferometric configuration to enable the measurement of long profiles. The adjustable profile length in our experimental setup, determined by the numerical aperture (NA) of the illuminating system and the aperture of the cylindrical lenses, is up to 10 mm. To simulate real-time surface inspection, large-scale 3D surface measurement was carried out by translating the tested sample during the measurement procedure. Two step-height surfaces were measured and the captured interferograms were analysed using a fast Fourier transform algorithm. Both 2D profile results and 3D surface maps closely align with the calibrated specifications given by the manufacturer.
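The FFT-based interferogram analysis described above can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' implementation: it assumes a spectral interferogram sampled uniformly in wavenumber k with intensity of the form A + B·cos(2zk), so the fringe frequency along the wavenumber axis encodes the surface height z directly.

```python
import numpy as np

def height_from_spectral_interferogram(intensity, wavenumbers):
    """Estimate surface height z from one spectral interferogram line.

    Assumes intensity ~ A + B*cos(2*z*k), sampled uniformly in
    wavenumber k, so the fringe frequency is z/pi cycles per unit
    wavenumber; the FFT peak location then yields z.
    """
    signal = intensity - intensity.mean()      # remove the DC term
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                          # suppress residual DC
    peak_bin = int(np.argmax(spectrum))
    n = len(intensity)
    dk = (wavenumbers[-1] - wavenumbers[0]) / (n - 1)
    freq = peak_bin / (n * dk)                 # cycles per unit wavenumber
    return np.pi * freq                        # z = pi * fringe frequency

# Synthetic example: a 50 um height sampled over an 800-900 nm band
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 800e-9, 1024)
z_true = 50e-6
interferogram = 1.0 + 0.5 * np.cos(2 * z_true * k)
z_est = height_from_spectral_interferogram(interferogram, k)
```

The height resolution of this bin-counting approach is limited by the spectral bandwidth; sub-bin peak interpolation would refine it further.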

    Real-time 3D surface-shape measurement using fringe projection and system-geometry constraints

    Optical three-dimensional (3D) surface-shape measurement has diverse applications in engineering, computer vision and medical science. Fringe projection profilometry (FPP) uses a camera-projector system to permit high-accuracy full-field 3D surface-shape measurement by projecting fringe patterns onto an object surface, capturing images of the deformed patterns, and computing the 3D surface geometry. A wrapped phase map can be computed from the camera images by phase analysis techniques. Phase unwrapping can resolve the phase ambiguity of the wrapped phase map and permit determination of camera-projector correspondences. The object surface geometry can then be reconstructed by stereovision techniques after system calibration. For real-time 3D measurement, geometry-constraint-based methods may be preferred over other phase-unwrapping methods, since they can handle surface discontinuities, which are problematic for spatial phase unwrapping, and they do not require the additional patterns needed in temporal phase unwrapping. However, the fringe patterns used in geometry-constraint-based methods are usually designed with a low frequency in order to maximize the reliability of correspondence determination. Although using high-frequency fringe patterns has proven effective in increasing measurement accuracy by suppressing phase error, high-frequency fringe patterns may reduce reliability and thus are not commonly used. To address the limitations of current geometry-constraint-based methods, a new fringe projection method for surface-shape measurement was developed using modulation of the background and amplitude intensities of the fringe patterns to permit identification of the fringe order, and thus unwrap the phase, for high-frequency fringe patterns. Another method was developed with background modulation only, using four high-frequency phase-shifted fringe patterns.
The pattern frequency is determined using a new fringe-wavelength geometry-constraint model that allows only two point candidates in the measurement volume. The correct corresponding point is selected with high reliability using a binary pattern computed from the background intensity. Equations of geometry-constraint parameters permit parameter calculation prior to measurement, thus reducing computational cost during measurement. In a further development, a new real-time 3D measurement method was devised using new background-modulated modified Fourier transform profilometry (FTP) fringe patterns and geometry constraints. The new method reduced the number of fringe patterns required for 3D surface reconstruction to two. A short camera-projector baseline allows reliable corresponding-point selection, even with high-frequency fringe patterns, and a new calibration approach reduces error induced by the short baseline. Experiments demonstrated the ability of the methods to perform real-time 3D measurement for a surface with geometric discontinuity, and for spatially isolated objects. Although multi-image FPP techniques can achieve higher accuracy than single-image methods, they suffer from motion artifacts when measuring dynamic object surfaces that are either moving or deforming. To reduce the motion-induced measurement error for multi-image FPP techniques, a new method was developed to first estimate the motion-induced phase-shift errors by computing the differences between phase maps over a sequence of multiple measurements. A phase map with reduced motion-induced error is then computed using the estimated phase-shift errors. This motion-induced error compensation is computed pixel-wise to handle non-homogeneous surface motion. Experiments demonstrated the ability of the method to reduce motion-induced error in real time during shape measurement of surfaces with high depth variation and of moving and deforming surfaces.
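The wrapped-phase computation that underlies the four-pattern method above follows the standard four-step phase-shifting formula. The sketch below illustrates only that generic step; the method's specific background/amplitude modulation and geometry-constraint unwrapping are not reproduced here.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images with phase shifts
    0, pi/2, pi, 3pi/2:
        i_n = A + B * cos(phi + n*pi/2)
    so that i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi),
    giving phi = atan2(i3 - i1, i0 - i2) in (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check on a linear phase ramp within (-pi, pi)
phi = np.linspace(-3.0, 3.0, 500)
images = [0.5 + 0.4 * np.cos(phi + n * np.pi / 2) for n in range(4)]
phi_est = wrapped_phase(*images)
```

The arctangent cancels both the background A and the modulation B, which is why phase-shifting methods are robust to non-uniform surface reflectivity.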

    A New Sensor System for Accurate 3D Surface Measurements and Modeling of Underwater Objects

    Featured Application: A potential application of this work is the underwater 3D inspection of industrial structures, such as oil and gas pipelines, offshore wind turbine foundations, or anchor chains. Abstract: A new underwater 3D scanning device based on structured illumination, designed for continuous capture of object data in motion for deep-sea inspection applications, is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities up to 0.7 m/s are directly compensated while capturing camera images for the 3D reconstruction pipeline. The accuracy results of static measurements of special specimens in a water basin with clear water show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the desired inspection tasks.

    Close-Range Photogrammetry Application for 3D Modeling of Tugu Muda Semarang

    Mapping of objects on the earth's surface largely concerns three-dimensional (3D) objects, and three-dimensional maps are therefore increasingly being produced. The base data used for modeling three-dimensional objects must have good precision and good geometric accuracy. In this research, the close-range photogrammetry method was used for 3D modeling of Tugu Muda with a non-metric digital camera. The research was divided into the stages of camera calibration, object photography, and 3D model processing. The calibration process yielded 80% qualified calibration results. A total of 96 images were captured, and data processing in this research used PhotoModeler Scanner and Summit Evolution, with a comparative geometric statistical test against Electronic Total Station measurements. The modeling stage consisted of an Automated Project, computation and model creation, 3D coordinate transformation, 3D model visualization, and statistical analysis of 24 geometric points. The final result of this research is a 3D model of the Tugu Muda monument in Semarang. The 3D modeling results were tested by comparing distances in the 3D model against reference Electronic Total Station measurements; the standard deviation of this comparison is 0.101 meters.

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects being on horizontal support surfaces (and well separated). It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real-time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data.
For the particular application of mapping with micro aerial vehicles, where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology, together with a surface-to-surface registration algorithm that robustly aligns the data. Optimization of the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGBD data this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows in the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data.
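A common way to segment planar primitives from 3D data, in the spirit of the plane/cylinder/sphere segmentation described above, is RANSAC plane fitting. The sketch below is a generic illustration of that idea, not the thesis's actual pipeline; all names and parameters are assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit the dominant plane in a point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0
    supported by the most points within `threshold` distance.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)     # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best[2].sum():
            best = (normal, d, inliers)
    return best

# Synthetic scene: a flat tabletop plus scattered clutter above it
rng = np.random.default_rng(1)
table = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
clutter = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 1.5])
cloud = np.vstack([table, clutter])
normal, d, inliers = ransac_plane(cloud)
```

Once the support plane is removed, the remaining points cluster naturally into the well-separated objects that the segmentation pipeline assumes.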

    The Surface Edge Explorer (SEE): A measurement-direct approach to next best view planning

    High-quality observations of the real world are crucial for a variety of applications, including producing 3D printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the next-best-view (NBV) planning problem. Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). This simplifies next best view selection but can be computationally expensive, reduces real-world fidelity, and couples the selection of a next best view with the final data processing. This paper presents the Surface Edge Explorer (SEE), an NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain similar or better surface coverage with less observation time and travel distance than evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm. Comment: Under review for the International Journal of Robotics Research (IJRR), Manuscript #IJR-22-4541. 25 pages, 17 figures, 6 tables. Videos available at https://www.youtube.com/watch?v=dqppqRlaGEA and https://www.youtube.com/playlist?list=PLbaQBz4TuPcyNh4COoaCtC1ZGhpbEkFE
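SEE's use of measurement density can be illustrated with a toy check that flags surface points whose local sampling density falls below a target. This is a loose interpretation for illustration only; the radius, density threshold, and brute-force neighbour search below are hypothetical choices, not values or methods from the paper.

```python
import numpy as np

def under_observed(points, radius=0.05, target_density=2000.0):
    """Flag points whose local surface density (neighbours per unit
    area within `radius`) falls below `target_density`.

    Density is approximated as neighbour_count / (pi * radius^2),
    treating the local neighbourhood as roughly planar.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    counts = (dist < radius).sum(axis=1) - 1    # exclude the point itself
    density = counts / (np.pi * radius ** 2)
    return density < target_density

# A densely sampled patch next to a sparsely sampled one
rng = np.random.default_rng(2)
dense = np.column_stack([rng.uniform(0.0, 0.1, (400, 2)), np.zeros(400)])
sparse = np.column_stack([rng.uniform(1.0, 1.1, (5, 2)), np.zeros(5)])
flags = under_observed(np.vstack([dense, sparse]))
```

Points flagged in this way mark insufficiently observed surface regions, and a planner could then propose views directed at them; a real system would use a spatial index rather than the O(n^2) distance matrix shown here.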