
    ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision

    By supervising camera rays between a scene and multi-view image planes, NeRF reconstructs a neural scene representation for the task of novel view synthesis. On the other hand, shadow rays between the light source and the scene have yet to be considered. Therefore, we propose a novel shadow ray supervision scheme that optimizes both the samples along the ray and the ray location. By supervising shadow rays, we successfully reconstruct a neural SDF of the scene from single-view pure shadow or RGB images under multiple lighting conditions. Given single-view binary shadows, we train a neural network to reconstruct a complete scene not limited by the camera's line of sight. By further modeling the correlation between the image colors and the shadow rays, our technique can also be effectively extended to RGB inputs. We compare our method with previous works on challenging tasks of shape reconstruction from single-view binary shadow or RGB images and observe significant improvements. The code and data will be released. Project page: https://gerwang.github.io/shadowneus
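
    The core idea lends itself to a short sketch. The following is a minimal, hypothetical rendition of shadow ray supervision in PyTorch, assuming a NeuS-style SDF network sdf_net and a logistic conversion from signed distance to occupancy; the names, sampling scheme, and loss are illustrative stand-ins, not the authors' implementation.

        import torch
        import torch.nn.functional as F

        def shadow_ray_loss(sdf_net, surface_pts, light_pos, shadow_labels,
                            n_samples=64, beta=50.0):
            """Supervise rays cast from the light source toward surface points.

            surface_pts:   (N, 3) surface points hit by the camera rays
            light_pos:     (3,) position of the point light
            shadow_labels: (N,) 1.0 if the pixel is lit, 0.0 if shadowed
            """
            dirs = surface_pts - light_pos  # light -> surface directions
            # Sample along each shadow ray, stopping short of the surface
            # point itself so its zero-SDF crossing is not counted as a blocker.
            t = torch.linspace(0.02, 0.98, n_samples, device=surface_pts.device)
            pts = light_pos + t[None, :, None] * dirs[:, None, :]  # (N, S, 3)
            sdf = sdf_net(pts.reshape(-1, 3)).reshape(pts.shape[0], n_samples)
            occ = torch.sigmoid(-beta * sdf)  # soft occupancy: ~1 inside the surface
            # Light visibility = probability that no sample blocks the ray.
            visibility = torch.prod(1.0 - occ, dim=-1)  # (N,)
            return F.binary_cross_entropy(
                visibility.clamp(1e-5, 1.0 - 1e-5), shadow_labels)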

    Video normals from colored lights

    We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can both register the surfaces and relax the homogeneous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and limitations of this simple multispectral capture system.
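
    A sketch of the underlying per-pixel computation may help. With three spectrally separated lights, each RGB channel effectively observes the surface under one light direction, so the observed color c relates to the unit normal n by c = M n for a 3x3 calibration matrix M that folds together light directions, spectral response, and albedo. The code below is a hedged illustration under that assumption; M and the function name are hypothetical, not this paper's implementation.

        import numpy as np

        def normals_from_rgb(frame, M):
            """frame: (H, W, 3) linear float RGB; M: (3, 3) calibration matrix."""
            h, w, _ = frame.shape
            c = frame.reshape(-1, 3).T            # (3, H*W) observed colors
            n = np.linalg.solve(M, c)             # invert c = M @ n per pixel
            n /= np.linalg.norm(n, axis=0, keepdims=True) + 1e-8
            return n.T.reshape(h, w, 3)           # dense unit normal field

    Integrating such a normal field per frame yields the smooth per-frame reconstructions the abstract describes; the 2D tracking step then registers them over time.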

    Discovering Regularity in Point Clouds of Urban Scenes

    Despite the apparent chaos of the urban environment, cities are actually replete with regularity. From the grid of streets laid out over the earth, to the lattice of windows thrown up into the sky, periodic regularity abounds in the urban scene. Just as salient, though less uniform, are the self-similar branching patterns of trees and vegetation that line streets and fill parks. We propose novel methods for discovering these regularities in 3D range scans acquired by a time-of-flight laser sensor. The applications of this regularity information are broad, and we present two original algorithms. The first exploits the efficiency of the Fourier transform for the real-time detection of periodicity in building facades. Periodic regularity is discovered online by performing a plane sweep across the scene and analyzing the frequency space of each column in the sweep. The simplicity and online nature of this algorithm allow it to be embedded in scanner hardware, making periodicity detection a built-in feature of future 3D cameras. We demonstrate the usefulness of periodicity in view registration, compression, segmentation, and facade reconstruction. The second algorithm leverages the hierarchical decomposition and locality in space of the wavelet transform to find stochastic parameters for procedural models that succinctly describe vegetation. These procedural models facilitate the generation of virtual worlds for architecture, gaming, and augmented reality. The self-similarity of vegetation can be inferred using multi-resolution analysis to discover the underlying branching patterns. We present a unified framework of these tools, enabling the modeling, transmission, and compression of high-resolution, accurate, and immersive 3D images.
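
    The first algorithm's frequency analysis reduces to a small computation per sweep column, sketched below under stated assumptions: points in a column are binned into a 1D occupancy signal along the facade, and the dominant FFT peak gives the period (for example, the window spacing). The bin size and function name are illustrative, not the paper's code.

        import numpy as np

        def dominant_period(xs, bin_size=0.1):
            """xs: coordinates (meters) of one sweep column's points along the facade."""
            lo, hi = xs.min(), xs.max()
            bins = max(int(np.ceil((hi - lo) / bin_size)), 4)
            occupancy, _ = np.histogram(xs, bins=bins)
            # Magnitude spectrum of the zero-mean occupancy signal.
            spectrum = np.abs(np.fft.rfft(occupancy - occupancy.mean()))
            k = spectrum[1:].argmax() + 1              # skip the DC component
            freqs = np.fft.rfftfreq(bins, d=bin_size)  # cycles per meter
            return 1.0 / freqs[k]                      # period in meters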

    Reconstruction of intricate surfaces from scanning electron microscopy

    This PhD thesis is concerned with the reconstruction of intricate shapes from scanning electron microscope (SEM) imagery. Since SEM images bear a certain resemblance to optical images, approaches developed in the wider field of computer vision can to a certain degree be applied to SEM images as well. I focus on two such approaches, namely Multiview Stereo (MVS) and Shape from Shading (SfS), and extend them to the SEM domain. The reconstruction of intricate shapes featuring thin protrusions and sparsely textured curved areas poses a significant challenge for current MVS techniques. The MVS methods I propose are designed to deal with such surfaces in particular, while also being robust to the specific problems inherent in the SEM modality: the absence of a static illumination and the unusually high noise level. I describe two different novel MVS methods aimed at narrow-baseline and medium-baseline imaging setups, respectively. Both of them build on the assumption of pixelwise photoconsistency. In the SfS context, I propose a novel empirical reflectance model for SEM images that allows for an efficient inference of surface orientation from multiple observations. My reflectance model is able to model both secondary and backscattered electron emission under an arbitrary detector setup. I describe two additional methods of inferring shape using combinations of MVS and SfS approaches: the first builds on my medium-baseline MVS method, which assumes photoconsistency, and improves on it by estimating the surface orientation using my reflectance model. The second goes beyond photoconsistency and estimates the depths themselves using the reflectance model.
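
    To make the SfS direction concrete, here is a heavily hedged sketch. It does not use the thesis's empirical reflectance model; instead it substitutes the classical secant law for secondary-electron yield, with intensity proportional to 1/cos(theta), as a stand-in to show how a predicted intensity can be scored against an observation when inferring surface orientation. All names are illustrative.

        import numpy as np

        def predicted_se_intensity(normals, beam_dir=(0.0, 0.0, -1.0), k=1.0):
            """normals: (..., 3) unit surface normals; beam along -z by default."""
            b = np.asarray(beam_dir)
            # Angle between the surface normal and the incoming beam.
            cos_theta = np.clip(-(normals @ b), 1e-3, 1.0)
            return k / cos_theta                  # tilted facets appear brighter

        def orientation_residual(normals, observed):
            """Photometric residual to minimize when fitting surface orientation."""
            return np.sum((predicted_se_intensity(normals) - observed) ** 2)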

    Rich Intrinsic Image Separation for Multi-View Outdoor Scenes

    Get PDF
    Intrinsic images aim at separating an image into its reflectance and illumination components to facilitate further analysis or manipulation. This separation is severely ill-posed, and the most successful methods rely on user indications or precise geometry to resolve the ambiguities inherent to this problem. In this paper we propose a method to estimate intrinsic images from multiple views of an outdoor scene without the need for precise geometry or involved user intervention. We use multiview stereo to automatically reconstruct a 3D point cloud of the scene. Although this point cloud is sparse and incomplete, we show that it provides the necessary information to compute plausible sky and indirect illumination at each 3D point. We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. We finally propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. This rich decomposition allows novel image manipulations, as demonstrated by our results.

    We present a method capable of decomposing photographs of a scene into intrinsic components: reflectance, illumination due to the sun, illumination due to the sky, and indirect illumination. Extracting intrinsic images from photographs is a difficult problem, generally solved using image-guided propagation methods that require multiple user indications. Recent computer vision methods allow the easy but approximate acquisition of a scene's geometric information from several photographs taken from different viewpoints. We develop a new algorithm that exploits this noisy, unreliable information to automate and improve propagation-based intrinsic image estimation. In particular, we develop a new optimization approach to estimate the cast shadows in the image, refining an initial estimate obtained from the reconstructed geometric information. In a final step, we adapt the image-guided propagation algorithms, replacing manual user indications with the shadow and reflectance data our algorithm derives from the 3D point cloud. Our method enables the automatic extraction of intrinsic images from multiple viewpoints, enabling many types of image manipulations.
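
    The decomposition implies a simple image model: the observed image is the product of reflectance and the summed illumination layers, I = R * (S_sun + S_sky + S_indirect). Below is a minimal sketch of recombining hypothetical layer outputs for one of the manipulations the abstract mentions (re-lighting by scaling the sun layer); the array names are assumptions, not the paper's code.

        import numpy as np

        def recompose(reflectance, sun, sky, indirect, sun_gain=1.0):
            """All inputs: (H, W, 3) linear float arrays; sun_gain re-lights the scene."""
            illumination = sun_gain * sun + sky + indirect
            return np.clip(reflectance * illumination, 0.0, 1.0)

        # Example edit: soften cast shadows by halving the sun layer.
        # relit = recompose(R, S_sun, S_sky, S_indirect, sun_gain=0.5)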