30 research outputs found

    Improving SLI Performance in Optically Challenging Environments

    The construction of 3D models of real-world scenes using non-contact methods is an important problem in computer vision. Some of the more successful methods belong to a class of techniques called structured light illumination (SLI). While SLI methods are generally very successful, there are cases where their performance is poor, such as scenes with a high dynamic range in albedo or scenes with strong interreflections; these are referred to as optically challenging environments. The work in this dissertation aims to improve SLI performance in such environments. A new method of high dynamic range imaging (HDRI) based on pixel-by-pixel Kalman filtering is developed; using objective metrics, it is shown to achieve up to a 9.4 dB improvement in signal-to-noise ratio and up to a 29% improvement in radiometric accuracy over a classic method. Quality checks are developed to detect and quantify multipath interference and other quality defects using phase measuring profilometry (PMP). Techniques are established to improve SLI performance in the presence of strong interreflections. Approaches from compressed sensing are applied to SLI, and interreflections in a scene are modeled using SLI. Several applications of this research are also discussed.
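The pixel-by-pixel Kalman fusion described above might be sketched as follows. This is a minimal illustration, not the dissertation's exact formulation: it assumes a linear measurement model z = t_exp · radiance + noise, and all variable names and values are ours.

```python
import numpy as np

def kalman_hdr_update(x, P, z, t_exp, R):
    """One per-pixel Kalman update fusing a new exposure into the
    running radiance estimate.

    x     : current radiance estimate (H x W)
    P     : current estimate variance (H x W)
    z     : new raw measurement (H x W), modeled as t_exp * radiance + noise
    t_exp : exposure time (scalar); the measurement model is H = t_exp
    R     : measurement noise variance (scalar or H x W)
    """
    H = t_exp                      # linear measurement model z = H * x + v
    S = H * P * H + R              # innovation variance
    K = P * H / S                  # Kalman gain, elementwise per pixel
    x_new = x + K * (z - H * x)    # corrected radiance estimate
    P_new = (1.0 - K * H) * P      # reduced uncertainty after the update
    return x_new, P_new

# fuse three simulated exposures of a constant-radiance test scene
rng = np.random.default_rng(0)
true_rad = np.full((4, 4), 2.0)
x, P = np.zeros((4, 4)), np.full((4, 4), 1e6)   # uninformative prior
for t_exp in (0.25, 1.0, 4.0):
    z = t_exp * true_rad + rng.normal(0.0, 0.05, (4, 4))
    x, P = kalman_hdr_update(x, P, z, t_exp, R=0.05**2)
```

Because short exposures capture bright pixels unsaturated and long exposures capture dark pixels above the noise floor, the per-pixel gain automatically weights each exposure by how informative it is, which is the appeal of a filtering view of HDRI.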

    DeepToF: Off-the-shelf real-time correction of multipath interference in time-of-flight imaging

    Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data have a similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground-truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.
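The core modeling insight above is that MPI-corrupted depth can be written as a (spatially-varying) convolution of the true depth, so correction amounts to inverting that convolution. DeepToF learns the inverse with a CNN autoencoder; the toy below is only a stand-in for that idea, using a fixed blur kernel as the "MPI" operator and classic Wiener deconvolution to invert it. All kernels and values are illustrative assumptions, not the paper's.

```python
import numpy as np

def wiener_deconv(corrupted, kernel, noise_ratio=1e-3):
    """Invert a known convolution in the Fourier domain
    (Wiener filter: conj(K) / (|K|^2 + noise_ratio))."""
    K = np.fft.fft2(kernel, s=corrupted.shape)
    C = np.fft.fft2(corrupted)
    W = np.conj(K) / (np.abs(K) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(C * W))

# synthetic "true" depth map: a step edge between two planes
true_depth = np.ones((32, 32))
true_depth[:, 16:] = 2.0

# MPI modeled here as a small local blur (a toy assumption)
kernel = np.ones((3, 3)) / 9.0
corrupted = np.real(np.fft.ifft2(
    np.fft.fft2(true_depth) * np.fft.fft2(kernel, s=true_depth.shape)))

restored = wiener_deconv(corrupted, kernel)
```

A real ToF scene has a different kernel at every pixel and the kernel is unknown, which is exactly why the paper replaces this fixed inverse filter with a learned convolutional network.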

    Shape recovery from reflection.

    by Yingli Tian. Thesis (Ph.D.), Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 202-222). Contents:
    Chapter 1: Introduction
        1.1 Physics-Based Shape Recovery Techniques
        1.2 Proposed Approaches to Shape Recovery in this Thesis
        1.3 Thesis Outline
    Chapter 2: Camera Model in Color Vision
        2.1 Introduction
        2.2 Spectral Linearization
        2.3 Image Balancing
        2.4 Spectral Sensitivity
        2.5 Color Clipping and Blooming
    Chapter 3: Extended Light Source Models
        3.1 Introduction
        3.2 A Spherical Light Model in 2D Coordinate System
            3.2.1 Basic Photometric Function for Hybrid Surfaces under a Point Light Source
            3.2.2 Photometric Function for Hybrid Surfaces under the Spherical Light Source
        3.3 A Spherical Light Model in 3D Coordinate System
            3.3.1 Radiance of the Spherical Light Source
            3.3.2 Surface Brightness Illuminated by One Point of the Spherical Light Source
            3.3.3 Surface Brightness Illuminated by the Spherical Light Source
            3.3.4 Rotating the Source-Object Coordinate to the Camera-Object Coordinate
            3.3.5 Surface Reflection Model
        3.4 Rectangular Light Model in 3D Coordinate System
            3.4.1 Radiance of a Rectangular Light Source
            3.4.2 Surface Brightness Illuminated by One Point of the Rectangular Light Source
            3.4.3 Surface Brightness Illuminated by a Rectangular Light Source
    Chapter 4: Shape Recovery from Specular Reflection
        4.1 Introduction
        4.2 Theory of the First Method
            4.2.1 Torrance-Sparrow Reflectance Model
            4.2.2 Relationship Between Surface Shapes from Different Images
        4.3 Theory of the Second Method
            4.3.1 Getting the Depth of a Reference Point
            4.3.2 Recovering the Depth and Normal of a Specular Point Near the Reference Point
            4.3.3 Recovering Local Shape of the Object by Specular Reflection
        4.4 Experimental Results and Discussions
            4.4.1 Experimental System and Results of the First Method
            4.4.2 Experimental System and Results of the Second Method
    Chapter 5: Shape Recovery from One Sequence of Color Images
        5.1 Introduction
        5.2 Temporal-color Space Analysis of Reflection
        5.3 Estimation of Illuminant Color Ks
        5.4 Estimation of the Color Vector of the Body-reflection Component Kl
        5.5 Separating Specular and Body Reflection Components and Recovering Surface Shape and Reflectance
        5.6 Experiment Results and Discussions
            5.6.1 Results with Interreflection
            5.6.2 Results Without Interreflection
            5.6.3 Simulation Results
        5.7 Analysis of Various Factors on the Accuracy
            5.7.1 Effects of Number of Samples
            5.7.2 Effects of Noise
            5.7.3 Effects of Object Size
            5.7.4 Camera Optical Axis Not in Light Source Plane
            5.7.5 Camera Optical Axis Not Passing Through Object Center
    Chapter 6: Shape Recovery from Two Sequences of Images
        6.1 Introduction
        6.2 Method for 3D Shape Recovery from Two Sequences of Images
        6.3 Genetics-Based Method
        6.4 Experimental Results and Discussions
            6.4.1 Simulation Results
            6.4.2 Real Experimental Results
    Chapter 7: Shape from Shading for Non-Lambertian Surfaces
        7.1 Introduction
        7.2 Reflectance Map for Non-Lambertian Color Surfaces
        7.3 Recovering Non-Lambertian Surface Shape from One Color Image
            7.3.1 Segmenting Hybrid Areas from Diffuse Areas Using Hue Information
            7.3.2 Calculating Intensities of Specular and Diffuse Components on Hybrid Areas
            7.3.3 Recovering Shape from Shading
        7.4 Experimental Results and Discussions
            7.4.1 Simulation Results
            7.4.2 Real Experimental Results
    Chapter 8: Shape from Shading under Multiple Extended Light Sources
        8.1 Introduction
        8.2 Reflectance Map for Lambertian Surface Under Multiple Rectangular Light Sources
        8.3 Recovering Surface Shape Under Multiple Rectangular Light Sources
        8.4 Experimental Results and Discussions
            8.4.1 Synthetic Image Results
            8.4.2 Real Image Results
    Chapter 9: Shape from Shading in Unknown Environments by Neural Networks
        9.1 Introduction
        9.2 Shape Estimation
            9.2.1 Shape Recovery Problem under Multiple Rectangular Extended Light Sources
            9.2.2 Forward Network Representation of Surface Normals
            9.2.3 Shape Estimation
        9.3 Application of the Neural Network in Shape Recovery
            9.3.1 Structure of the Neural Network
            9.3.2 Normalization of the Input and Output Patterns
        9.4 Experimental Results and Discussions
            9.4.1 Results for Lambertian Surface under One Rectangular Light
            9.4.2 Results for Lambertian Surface under Four Rectangular Light Sources
            9.4.3 Results for Hybrid Surface under One Rectangular Light Source
            9.4.4 Discussions
    Chapter 10: Summary and Conclusions
        10.1 Summary Results and Contributions
        10.2 Directions of Future Research
    Bibliography

    An asynchronous method for cloud-based rendering

    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, these systems have notable shortcomings: high network bandwidths are required for higher resolutions, and input lag caused by network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline in which direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache of indirect lighting that is asynchronously updated using an object-space representation; this allows us to achieve interactive rates that are unconstrained by network performance for a wide range of display resolutions and that are robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over participating clients.
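The client-side structure implied by the abstract can be sketched as follows. This is a hypothetical, heavily simplified model (all names and values are ours, not the paper's): direct lighting is shaded locally every frame, while indirect lighting is read from an object-space cache that cloud results refresh whenever they happen to arrive, so a frame never blocks on the network.

```python
# Minimal sketch of the direct/indirect split: frames always use the
# cached indirect term, however stale; cloud updates arrive any time.

class IndirectCache:
    def __init__(self):
        self.values = {}          # object id -> (irradiance, version)

    def update(self, obj_id, irradiance, version):
        # called whenever a cloud result arrives (any time, any order)
        cur = self.values.get(obj_id, (0.0, -1))
        if version > cur[1]:      # keep only the newest solution
            self.values[obj_id] = (irradiance, version)

    def lookup(self, obj_id):
        return self.values.get(obj_id, (0.0, -1))[0]

def direct_light(obj_id):
    # stand-in for the client's cheap per-frame direct shading
    return 1.0

def shade_frame(objects, cache):
    # direct computed locally per frame; indirect from the cache
    return {o: direct_light(o) + cache.lookup(o) for o in objects}

cache = IndirectCache()
frame0 = shade_frame(["wall"], cache)    # before any cloud result: direct only
cache.update("wall", 0.4, version=0)     # indirect solution arrives late
frame1 = shade_frame(["wall"], cache)    # subsequent frames pick it up
```

Because indirect lighting changes slowly relative to camera motion, serving a stale cached value degrades quality only gradually rather than stalling the frame, which is what makes the rates network-independent.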

    Recent advances in transient imaging: A computer graphics and vision perspective

    Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at the picosecond or nanosecond scale, information usually lost during capture-time temporal integration. This paper presents recent advances in transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications, and simulation.
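The "information lost during capture-time temporal integration" can be made concrete with a toy pixel. A conventional sensor sums the time-resolved response over the exposure; a transient measurement keeps the arrival times that distinguish the direct bounce from later interreflections. The arrival times and intensities below are illustrative values, not from any real capture.

```python
import numpy as np

# Toy transient response of one pixel: a direct bounce at 2 ns and a
# weaker interreflection at 5 ns (illustrative values).
t = np.arange(0.0, 10.0, 0.1)                # time axis in nanoseconds
transient = np.zeros_like(t)
transient[20] = 0.8                          # direct return at 2 ns
transient[50] = 0.3                          # indirect return at 5 ns

# A conventional camera integrates over the exposure: one number,
# with the two bounces irreversibly mixed together.
steady = transient.sum() * 0.1               # dt = 0.1 ns

# Transient imaging keeps the temporal profile, so e.g. the first
# arrival (a depth cue) is still recoverable.
first_arrival_ns = t[np.argmax(transient > 0)]
```

Everything the survey covers, from seeing around corners to imaging through scattering media, amounts to exploiting the temporal structure that `steady` has already destroyed.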

    Towards Casual Appearance Capture by Reflectance Symmetry

    Thesis (Ph.D., Doctor of Philosophy)

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recovering the geometry of real-world scenes. Computational imaging is an emerging technique that co-designs imaging hardware and computational software to expand the capabilities of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. In particular, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system; deep color features and geodesic distance features are then used to perform face recognition. For underwater imaging, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation and developing a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction; we formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals in an optimization framework. To recover 3D shape under unknown, uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
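The "angular sampling for robust depth estimation" idea admits a compact generic sketch, not the dissertation's underwater pipeline: the light field gives many sub-aperture views of the same point, and at the correct depth hypothesis the views, once sheared accordingly, agree, so the disparity with minimum angular variance wins. The 1D scanline, offsets, and disparities below are all illustrative assumptions.

```python
import numpy as np

def depth_from_angular_variance(views, view_offsets, disparities):
    """views: list of 1D sub-aperture scanlines; view_offsets: angular
    position u of each view. Returns the best disparity per pixel by
    minimizing variance across the sheared views."""
    costs = []
    for d in disparities:
        # shear: shift each view by d * u, then compare across views
        sheared = [np.roll(v, int(round(d * u)))
                   for v, u in zip(views, view_offsets)]
        costs.append(np.var(np.stack(sheared), axis=0))
    return np.array(disparities)[np.argmin(np.array(costs), axis=0)]

# synthetic scanline seen from angular offsets u = -1, 0, +1 with
# true disparity 2 (each view shifted by -2 * u pixels)
base = np.zeros(64)
base[30:34] = 1.0
offsets = [-1, 0, 1]
views = [np.roll(base, -2 * u) for u in offsets]
est = depth_from_angular_variance(views, offsets, disparities=[0, 1, 2, 3])
```

Textureless regions match every disparity equally well, so in practice this cost would be regularized spatially; the sketch only resolves depth where the scanline has structure.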