
    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant-shading and constant-reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications, such as recoloring of objects, relighting of scenes, and editing of material appearance.
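    The underlying model is the standard multiplicative intrinsic decomposition, image = reflectance × shading, with per-voxel reflectance estimates accumulated over frames much like depth samples in volumetric (TSDF-style) fusion. The sketch below is a minimal illustration of such a running weighted average for a single voxel, assuming a hypothetical per-voxel albedo and confidence weight; it is not the authors' implementation.

```python
import numpy as np

def fuse_reflectance(voxel_albedo, voxel_weight, new_albedo, new_weight=1.0):
    """Running weighted average of per-voxel reflectance estimates,
    analogous to TSDF depth fusion (hypothetical sketch)."""
    total = voxel_weight + new_weight
    fused = (voxel_albedo * voxel_weight + new_albedo * new_weight) / total
    return fused, total

# Example: fuse a reflectance sample from a new frame into one voxel.
albedo = np.array([0.4, 0.3, 0.2])   # current fused RGB reflectance
weight = 5.0                          # accumulated confidence
sample = np.array([0.5, 0.3, 0.1])   # reflectance estimate from the new frame
albedo, weight = fuse_reflectance(albedo, weight, sample)
```

    Because the fused values live on the 3D geometry rather than in image space, re-projecting them into a novel view is what makes the user constraints persist across frames.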

    User-assisted intrinsic images

    For many computational photography applications, the lighting and materials in the scene are critical pieces of information. We seek to obtain intrinsic images, which decompose a photo into the product of an illumination component that represents lighting effects and a reflectance component that is the color of the observed material. This is an under-constrained problem, and automatic methods are challenged by complex natural images. We describe a new approach that enables users to guide an optimization with simple indications such as regions of constant reflectance or illumination. Based on a simple assumption about local reflectance distributions, we derive a new propagation energy that admits a closed-form solution via linear least squares. We achieve fast performance by introducing a novel downsampling scheme that preserves local color distributions. We demonstrate intrinsic image decomposition on a variety of images and show applications.
    Funding: National Science Foundation (U.S.) (NSF CAREER award 0447561); Institut national de recherche en informatique et en automatique (France) (Associate Research Team “Flexible Rendering”); Microsoft Research (New Faculty Fellowship); Alfred P. Sloan Foundation (Research Fellowship); Quanta Computer, Inc. (MIT-Quanta T Party)
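    A closed-form least-squares propagation of this kind typically minimizes a quadratic energy over a per-pixel unknown (e.g. log-shading), with a smoothness term coupling neighboring pixels and user strokes entering as soft constraints. The sketch below shows that general pattern, assuming a simple 4-neighbor Laplacian and a diagonal constraint term; the paper's actual energy and weights differ, so treat `lam` and the neighborhood structure as placeholders.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate(h, w, constraints, lam=100.0):
    """Solve (L + lam*C) x = lam*c for a scalar field x on an h*w grid.
    constraints: dict mapping pixel index -> constrained value."""
    n = h * w
    rows, cols, vals = [], [], []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for ny, nx in ((y, x + 1), (y + 1, x)):  # right/down neighbors
                if ny < h and nx < w:
                    k = ny * w + nx
                    # Each term (x_i - x_k)^2 contributes a 2x2 block to L.
                    rows += [i, i, k, k]; cols += [i, k, i, k]
                    vals += [1.0, -1.0, -1.0, 1.0]
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    idx = list(constraints)
    C = sp.csr_matrix((np.ones(len(idx)), (idx, idx)), shape=(n, n))
    c = np.zeros(n)
    for i, v in constraints.items():
        c[i] = v
    return spsolve((L + lam * C).tocsc(), lam * c).reshape(h, w)

# Toy usage: pin two pixels and let values propagate smoothly between them.
field = propagate(8, 8, {0: 0.0, 63: 1.0})
```

    Because the energy is quadratic, a single sparse solve replaces iterative optimization, which is what makes interactive feedback after each stroke feasible.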

    Scribble-based gradient mesh recoloring

    Previous gradient mesh recoloring methods usually depend on an additional reference image and on a rasterized version of the gradient mesh. To circumvent these dependencies, we propose a user-scribble-based recoloring method in which users annotate gradient meshes with a few color scribbles. Our approach builds an auxiliary mesh from the gradient meshes, namely a control net, by taking into account both the colors and the local color gradients at mesh points. We then develop an extended chrominance blending method to propagate the user-specified colors over the control net. The recolored gradient mesh is finally reconstructed from the recolored control net. Experiments validate the effectiveness of our approach on multiple gradient meshes. Compared with various alternative solutions, our method exhibits neither color bleeding nor sampling artifacts, and achieves fast performance.
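    Chrominance blending, originally introduced for image colorization by Yatziv and Sapiro, propagates each scribble's chrominance with weights that fall off with distance while luminance is left untouched. The sketch below is a heavily simplified variant using Euclidean distances between 2-D points standing in for control-net nodes; the paper's extended method over gradient meshes is more involved, and all names here are illustrative.

```python
import numpy as np

def blend_chrominance(points, scribble_pts, scribble_uv, beta=2.0):
    """Distance-weighted blending of scribble chrominances (U, V) onto
    points; a simplified Euclidean stand-in for intrinsic distances."""
    d = np.linalg.norm(points[:, None, :] - scribble_pts[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** beta    # closer scribbles dominate
    w /= w.sum(axis=1, keepdims=True)
    return w @ scribble_uv                   # per-point blended (U, V)

# Two scribbles with different chrominance; nodes in between get a blend.
nodes = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
scribbles = np.array([[0.0, 0.0], [1.0, 1.0]])
uv = np.array([[0.3, -0.2], [-0.1, 0.4]])
print(blend_chrominance(nodes, scribbles, uv))
```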

    CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

    Most traditional work on intrinsic image decomposition relies on deriving priors about scene characteristics. On the other hand, recent research uses deep learning models as black boxes and does not consider the well-established, traditional image formation process as the basis of the learning process. As a consequence, although current deep learning approaches show superior quantitative benchmark performance, traditional approaches still achieve better qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach of obtaining intrinsic images from reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object-centered, large-scale dataset with intrinsic ground-truth images is created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function improves color reproduction and that the use of gradient information produces sharper edges. Datasets, models, and higher-resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet.
    Comment: CVPR 201
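    Steering the network with a physics-based model amounts to penalizing deviation from the Lambertian image formation equation, I = R ⊙ S (elementwise product of reflectance and shading), alongside the usual per-component losses. A minimal NumPy sketch of such a reconstruction loss is below; the paper's actual loss weighting and gradient terms are not reproduced here.

```python
import numpy as np

def image_formation_loss(image, reflectance, shading):
    """Penalize deviation from the Lambertian model I = R * S
    (elementwise product); a hedged sketch, not the paper's exact loss."""
    recon = reflectance * shading
    return np.mean((image - recon) ** 2)

# Sanity check: a perfect decomposition reconstructs the image exactly.
R = np.random.rand(4, 4, 3)
S = np.random.rand(4, 4, 1)    # grayscale shading broadcast over RGB
I = R * S
assert np.isclose(image_formation_loss(I, R, S), 0.0)
```

    Tying the predicted components back to the input image in this way is what lets the physics constrain the network even where per-component ground truth is ambiguous.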

    Live Intrinsic Video


    Rich Intrinsic Image Separation for Multi-View Outdoor Scenes

    Intrinsic images aim to separate an image into its reflectance and illumination components to facilitate further analysis or manipulation. This separation is severely ill-posed, and the most successful methods rely on user indications or precise geometry to resolve the inherent ambiguities. In this paper we propose a method to estimate intrinsic images from multiple views of an outdoor scene without the need for precise geometry or involved user intervention. We use multi-view stereo to automatically reconstruct a 3D point cloud of the scene. Although this point cloud is sparse and incomplete, we show that it provides the information necessary to compute plausible sky and indirect illumination at each 3D point. We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. We finally propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. This rich decomposition allows novel image manipulations, as demonstrated by our results.
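    A rich decomposition of this kind factors the image as I = R ⊙ (v · L_sun + L_sky + L_ind), where v is per-pixel sun visibility (the shadow layer). The sketch below recombines such layers and shows the kind of manipulation this enables, here dimming the sun layer to soften shadows; the layer names and the binary visibility are illustrative assumptions, not the paper's representation.

```python
import numpy as np

def recompose(reflectance, sun, sky, indirect, sun_visibility, sun_scale=1.0):
    """Rebuild an image from a rich intrinsic decomposition:
    I = R * (v * sun_scale * L_sun + L_sky + L_indirect)."""
    illum = sun_visibility * sun_scale * sun + sky + indirect
    return reflectance * illum

# Illustrative relighting: halve the sun contribution to soften shadows.
h, w = 4, 4
R = np.random.rand(h, w, 3)
L_sun, L_sky, L_ind = (np.random.rand(h, w, 1) for _ in range(3))
v = (np.random.rand(h, w, 1) > 0.3).astype(float)   # binary sun visibility
relit = recompose(R, L_sun, L_sky, L_ind, v, sun_scale=0.5)
```

    Keeping the sun, sky, and indirect layers separate is precisely what allows edits such as relighting without disturbing the reflectance.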