
    Real Time Structured Light and Applications


    On Creating Reference Data for Performance Analysis in Image Processing

    This thesis investigates methods for creating reference datasets for image processing, especially for the dense correspondence problem. Three types of reference data can be identified: real datasets with dense ground truth, real datasets with sparse or missing ground truth, and synthetic datasets. For the creation of real datasets with ground truth, an existing method based on depth map fusion was evaluated. The described method is especially suited for creating large amounts of reference data with known accuracy. The creation of reference datasets with missing ground truth was examined using multiple datasets for the automotive industry as an example. The data was used successfully for verification and evaluation by multiple image processing projects. Finally, it was investigated how methods from computer graphics can be used for creating synthetic reference datasets. In particular, the creation of photorealistic image sequences using global illumination was examined for the task of evaluating algorithms. The results show that while such sequences can be used for evaluation, their creation is hindered by practicality problems. As an application example, a new simulation method for Time-of-Flight depth cameras was developed that can simulate all relevant error sources of these systems.
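
    For illustration only (this is not the simulation method developed in the thesis, whose details are not given here): a minimal Python sketch of how an ideal depth map could be degraded with two error sources commonly attributed to Time-of-Flight cameras, a periodic "wiggling" bias tied to the modulation frequency and distance-dependent noise. The function name and all parameter values are assumptions.

```python
import numpy as np

def simulate_tof_depth(true_depth_m, mod_freq_hz=20e6, noise_floor_m=0.005,
                       wiggling_amp_m=0.01, rng=None):
    """Illustrative ToF error model (not the thesis's simulator)."""
    rng = np.random.default_rng() if rng is None else rng
    c = 299_792_458.0                          # speed of light [m/s]
    unambiguous_range = c / (2.0 * mod_freq_hz)

    # Systematic "wiggling" bias, periodic in the phase of the modulated signal.
    phase = 2.0 * np.pi * true_depth_m / unambiguous_range
    wiggling = wiggling_amp_m * np.sin(4.0 * phase)

    # Random noise that grows with distance as the returned signal weakens.
    sigma = noise_floor_m * (1.0 + true_depth_m / unambiguous_range)
    noise = rng.normal(0.0, sigma)

    return true_depth_m + wiggling + noise

# Example: degrade a flat synthetic depth map at roughly 2 m range.
print(simulate_tof_depth(np.full((4, 4), 2.0)))
```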

    Acquisition, Modeling, and Augmentation of Reflectance for Synthetic Optical Flow Reference Data

    This thesis is concerned with the acquisition, modeling, and augmentation of material reflectance to simulate high-fidelity synthetic data for computer vision tasks. The topic is covered in three chapters: I commence by exploring the upper limits of reflectance acquisition. I analyze state-of-the-art BTF reflectance field renderings and show that they can be applied to optical flow performance analysis, with performance closely matching that of real-world images. Next, I present two methods for fitting efficient BRDF reflectance models to measured BTF data. Combined, the two methods retain all relevant reflectance information as well as surface normal details at the pixel level. I further show that the resulting synthesized images are suited for optical flow performance analysis, with virtually identical performance for all material types. Finally, I present a novel method for augmenting real-world datasets with physically plausible precipitation effects, including ground surface wetting, water droplets on the windshield, and water spray and mist. This is achieved by projecting the real-world image data onto a reconstructed virtual scene, manipulating the scene and the surface reflectance, and performing unbiased light transport simulation of the precipitation effects.
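
    As a rough illustration of what per-pixel BRDF fitting can look like (this is not the thesis's method; the Lambertian-plus-Phong model, the sample format, and all names are assumptions), the sketch below recovers diffuse and specular parameters for a single pixel from reflectance samples by non-linear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

def phong_brdf(params, n, l, v):
    """Lambertian + Phong lobe; n, l, v are unit vectors of shape (N, 3)."""
    kd, ks, shininess = params
    r = 2.0 * np.sum(n * l, axis=-1, keepdims=True) * n - l   # mirror direction of l
    spec = np.clip(np.sum(r * v, axis=-1), 0.0, 1.0) ** shininess
    diff = np.clip(np.sum(n * l, axis=-1), 0.0, 1.0)
    return kd * diff + ks * spec

def fit_pixel(observed, n, l, v):
    """Fit (kd, ks, shininess) for one pixel from measured samples."""
    residual = lambda p: phong_brdf(p, n, l, v) - observed
    return least_squares(residual, x0=[0.5, 0.5, 20.0],
                         bounds=([0, 0, 1], [1, 1, 500])).x

# Synthetic test: generate samples from known parameters and recover them.
rng = np.random.default_rng(0)
l = rng.normal(size=(200, 3)); l[:, 2] = np.abs(l[:, 2])
v = rng.normal(size=(200, 3)); v[:, 2] = np.abs(v[:, 2])
l /= np.linalg.norm(l, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)
n = np.tile([0.0, 0.0, 1.0], (200, 1))
observed = phong_brdf([0.3, 0.6, 50.0], n, l, v)
print(fit_pixel(observed, n, l, v))   # should land near [0.3, 0.6, 50]
```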

    High Speed, Micron Precision Scanning Technology for 3D Printing Applications

    Modern 3D printing technology is becoming a more viable option for use in industrial manufacturing. As the speed and precision of rapid prototyping technology improve, so too must 3D scanning and verification technology. Current 3D scanning technology (such as CT scanners) produces the resolution needed for micron-precision inspection; however, it lacks speed. Some scans can be multiple gigabytes in size, taking several minutes to acquire and process. Especially in high-volume manufacturing of 3D printed parts, such delays prohibit the widespread adoption of 3D scanning technology for quality control. The limiting factors of current technology come down to computational and processing power along with available sensor resolution and operational frequency. Realizing a 3D scanning system that produces micron-precision results within a single minute promises to revolutionize the quality control industry. The specific 3D scanning method considered in this thesis utilizes a line profile triangulation sensor with a high operational frequency and a high-precision mechanical actuation apparatus for controlling the scan. By syncing the operational frequency of the sensor to the actuation velocity of the apparatus, a 3D point cloud is rapidly acquired. Processing of the data is then performed in MATLAB on contemporary computing hardware; this includes proper point cloud formatting and an implementation of the Iterative Closest Point (ICP) algorithm for point cloud stitching. Theoretical and physical experiments are performed to demonstrate the validity of the method. The prototyped system is shown to produce multiple loosely registered, micron-precision point clouds of a 3D printed object that are then stitched together to form a full point cloud representative of the original part. This prototype produces micron-precision results in approximately 130 seconds, but the experiments illuminate the additional investments by which this time could be further reduced to approach the revolutionizing one-minute milestone.
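
    A minimal sketch of the point-to-point ICP refinement used for stitching, in Python rather than the thesis's MATLAB pipeline; the helper names are assumptions, and a real system would also perform the loose pre-registration and point cloud formatting steps mentioned above.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=50, tol=1e-8):
    """Point-to-point ICP: repeatedly match nearest neighbours and realign."""
    tree = cKDTree(target)
    src, prev_err = source.copy(), np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)           # closest target point per source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return src

# Example: realign a slightly rotated and shifted copy of a scan.
rng = np.random.default_rng(1)
target = rng.uniform(size=(500, 3))
a = np.deg2rad(5.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
aligned = icp(target @ Rz.T + 0.01, target)
```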

    NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields

    Neural radiance fields (NeRF) have revolutionized the field of image-based view synthesis. However, NeRF uses straight rays and fails to deal with complicated light path changes caused by refraction and reflection. This prevents NeRF from successfully synthesizing transparent or specular objects, which are ubiquitous in real-world robotics and AR/VR applications. In this paper, we introduce the refractive-reflective field. Taking the object silhouette as input, we first utilize marching tetrahedra with a progressive encoding to reconstruct the geometry of non-Lambertian objects and then model the refraction and reflection effects of the object in a unified framework using Fresnel terms. Meanwhile, to achieve efficient and effective anti-aliasing, we propose a virtual cone supersampling technique. We benchmark our method on different shapes, backgrounds, and Fresnel terms on both real-world and synthetic datasets. We also qualitatively and quantitatively benchmark the rendering results of various editing applications, including material editing, object replacement/insertion, and environment illumination estimation. Code and data are publicly available at https://github.com/dawning77/NeRRF
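
    For background on the Fresnel terms referred to above (this sketch is not the NeRRF code, which is available at the linked repository): Snell's law gives the refracted ray direction, and the unpolarized Fresnel equations give the fraction of energy reflected rather than refracted at a dielectric boundary. Function names and the air-to-glass example are assumptions.

```python
import numpy as np

def reflect(d, n):
    """Mirror the incident direction d about the surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Refracted direction for incident d, normal n, and eta = n_i / n_t.
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def fresnel_unpolarized(cos_i, n_i, n_t):
    """Unpolarized Fresnel reflectance at a dielectric interface."""
    sin2_t = (n_i / n_t) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return 1.0                            # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    r_s = (n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t)
    r_p = (n_t * cos_i - n_i * cos_t) / (n_t * cos_i + n_i * cos_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)

# Example: ray hitting an air-to-glass (n = 1.5) interface at 45 degrees.
d = np.array([np.sin(np.radians(45)), 0.0, -np.cos(np.radians(45))])
n = np.array([0.0, 0.0, 1.0])
print("reflectance:", fresnel_unpolarized(-np.dot(d, n), 1.0, 1.5))   # ~0.05
print("reflected:", reflect(d, n), "refracted:", refract(d, n, 1.0 / 1.5))
```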

    Autonomous Optical Inspection of Large Scale Freeform Surfaces
