2,138 research outputs found

    Deep Reflectance Maps

    Get PDF
    Undoing the image formation process, and thereby decomposing appearance into its intrinsic properties, is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by incorporating additional supervision in an indirect scheme that first predicts surface orientation and then predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images. Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
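A reflectance map stores, for each visible surface orientation, the color that orientation reflects toward the camera. As a rough non-learned sketch of the "orientation first, then sparse interpolation" scheme described above (plain NumPy; the function name, grid resolution, and nearest-cell averaging are illustrative choices, not the paper's learned interpolator):

```python
import numpy as np

def reflectance_map_from_normals(normals, colors, res=32):
    """Scatter per-pixel colors onto an orientation grid indexed by the
    (nx, ny) components of front-facing unit normals, averaging colors
    that land in the same cell. A simplified stand-in for the paper's
    learned sparse-data interpolation step."""
    acc = np.zeros((res, res, 3))
    cnt = np.zeros((res, res, 1))
    # map nx, ny in [-1, 1] onto grid indices
    ix = np.clip(((normals[:, 0] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    iy = np.clip(((normals[:, 1] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    for x, y, c in zip(ix, iy, colors):
        acc[y, x] += c
        cnt[y, x] += 1
    filled = cnt[..., 0] > 0
    rmap = np.zeros_like(acc)
    rmap[filled] = acc[filled] / cnt[filled]
    return rmap, filled
```

A learned interpolator, as proposed in the paper, would additionally fill the cells that receive no observations from the sparse populated ones.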

    Digital data from shuttle photography: The effects of platform variables

    Get PDF
    Two major criticisms of using Shuttle hand-held photography as an Earth science sensor are that it is nondigital and nonquantitative, and that it has inconsistent platform characteristics, e.g., variable look angles, especially as compared to remote sensing satellites such as LANDSAT and SPOT. However, these criticisms are assumptions and have not been systematically investigated. The spectral effects of off-nadir views of hand-held photography from the Shuttle, and their role in the interpretation of lava flow morphology on the island of Hawaii, are studied. Digitization of the photography at JSC and the use of LIPS image analysis software to obtain data are discussed. Preliminary interpretative results for one flow are given. Most of the time was spent developing procedures and overcoming equipment problems. Preliminary data are satisfactory for detailed analysis.

    Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

    Full text link
    The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, prior models restrict generalization to the true diversity in facial geometry, skin reflectance, and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance, and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantages of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state of the art in terms of reconstruction quality, generalizes better to real-world faces, and runs at over 250 Hz. Comment: CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML
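The self-supervised signal described above boils down to rendering the current model estimate and comparing it against the input photograph, with no dense annotations required. A minimal stand-in (NumPy; the function name, the plain L2 photometric term, and the simple parameter regularizer are assumptions — the paper's losses are richer and defined at multiple detail levels):

```python
import numpy as np

def self_supervised_loss(rendered, image, mask, params, reg_weight=1e-3):
    """Photometric reconstruction loss on foreground pixels plus a simple
    squared-norm regularizer on the model parameters. `mask` marks the
    face region so the background does not contribute to the loss."""
    diff = (rendered - image) * mask[..., None]
    photo = np.sum(diff ** 2) / max(mask.sum(), 1)  # mean over face pixels
    reg = reg_weight * np.sum(params ** 2)          # keep coefficients plausible
    return photo + reg
```

Because the renderer in such a pipeline is differentiable, this scalar can be backpropagated through the rendering step into both the encoder and the face model itself.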

    Optimal land cover mapping and change analysis in northeastern Oregon using Landsat imagery

    Get PDF
    The development of repeatable, efficient, and accurate monitoring of land cover change is paramount to successful management of our planet’s natural resources. This study evaluated a number of remote sensing methods for classifying land cover and land cover change throughout a two-county area in northeastern Oregon (1986 to 2011). In the past three decades, this region has seen significant changes in forest management that have affected land use and land cover. This study employed an accuracy-assessment-based empirical approach to test the optimality of a number of advanced digital image processing techniques that have recently emerged in the field of remote sensing. The accuracies are assessed using traditional error matrices, calculated using reference data obtained in the field. We found that, for single-time land cover classification, Bayes pixel-based classification using samples created with scale and shape segmentation parameters of 8 and 0.3, respectively, resulted in the highest overall accuracy. For land cover change detection, using Landsat-5 TM band 7 with a change threshold of 1.75 standard deviations resulted in the highest accuracy for forest harvesting and regeneration mapping.
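The best-performing change detection rule above is a standard band-differencing scheme: difference the two dates of band 7, then flag pixels whose difference lies more than 1.75 standard deviations from the mean. A minimal sketch (NumPy; the function name is illustrative, and the study's full pipeline of preprocessing and accuracy assessment is omitted):

```python
import numpy as np

def band_difference_change(band_t1, band_t2, k=1.75):
    """Flag change where the inter-date band difference deviates more than
    k standard deviations from its scene-wide mean. k = 1.75 matches the
    study's best-performing threshold for band 7."""
    d = band_t2.astype(float) - band_t1.astype(float)
    mu, sigma = d.mean(), d.std()
    return np.abs(d - mu) > k * sigma
```

The returned boolean mask marks candidate harvest or regeneration pixels; in practice its accuracy would then be checked against field reference data via an error matrix, as the study does.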

    Retinex theory for color image enhancement: A systematic review

    Get PDF
    A short but comprehensive review of Retinex is presented in this paper. Retinex theory aims to explain human color perception. In addition, its derivations that modify the reflectance component have introduced effective approaches for image contrast enhancement. In this review, the classical theory of Retinex is covered. Moreover, advanced and improved Retinex techniques proposed in the literature are addressed. The strengths and weaknesses of each technique are discussed and compared. An optimal parameter must be determined to define the image degradation level; determining such a parameter would help quantify the amount of adjustment in the Retinex theory. Thus, a robust framework that modifies the reflectance component of the Retinex theory could be developed to enhance the overall quality of color images.
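The reflectance-modification idea at the heart of these techniques is often written in its classical single-scale form: the log of the image minus the log of a Gaussian-smoothed illumination estimate, R = log I − log(G_σ ∗ I). A minimal NumPy sketch (the σ value, border handling, and function name are illustrative choices; multi-scale variants sum several such terms):

```python
import numpy as np

def single_scale_retinex(image, sigma=15.0, eps=1e-6):
    """Single-scale Retinex on one channel: subtract, in the log domain,
    a Gaussian-blurred illumination estimate from the input image.
    The blur is a separable convolution with reflected borders."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    img = image.astype(float)
    # separable Gaussian: filter rows, then columns
    blurred = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, radius, mode="reflect"), kernel, "valid"),
        1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, radius, mode="reflect"), kernel, "valid"),
        0, blurred)
    return np.log(img + eps) - np.log(blurred + eps)
```

On a region of uniform illumination the output is near zero, which is exactly the property the enhancement methods exploit: slowly varying illumination is suppressed while reflectance edges are kept.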

    State of the Art on Neural Rendering

    Get PDF
    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
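The "differentiable rendering inside network training" idea mentioned above can be illustrated in miniature without any network: if the renderer is differentiable with respect to a scene parameter, image error can drive updates of that parameter directly. A toy sketch (NumPy; the Lambertian model, single light, and closed-form gradient are simplifying assumptions, not any surveyed method):

```python
import numpy as np

def fit_albedo(normals, light, target, steps=200, lr=0.5):
    """Recover a constant RGB albedo by gradient descent on a photometric
    loss through a Lambertian renderer. Shading is linear in albedo, so
    the loss gradient is available in closed form."""
    shade = np.clip(normals @ light, 0.0, None)  # per-pixel n . l
    albedo = np.full(3, 0.5)                     # start from gray
    for _ in range(steps):
        rendered = shade[:, None] * albedo[None, :]
        # d/d albedo of sum((rendered - target)^2), per color channel
        grad = 2 * (rendered - target).T @ shade[:, None]
        albedo -= lr * grad[:, 0] / len(shade)
        albedo = np.clip(albedo, 0.0, 1.0)
    return albedo
```

Neural rendering methods apply the same principle at scale: the "parameter" becomes a learned scene representation or network weights, and the gradients flow through a full differentiable rendering pipeline.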

    Shadow Removal and Interpolation for Computer Vision and Graphics

    Get PDF
    University of Tokyo (東京大学)