
    Deep Reflectance Maps

    Undoing the image formation process and thereby decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images alone, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by introducing additional supervision in an indirect scheme that first predicts surface orientation and then predicts the reflectance map via learning-based sparse data interpolation. To analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.
    Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
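    The sparse-data interpolation step of the indirect scheme can be pictured with a toy, non-learned stand-in: scatter observed pixel colours into a grid indexed by surface orientation, then fill unobserved cells by nearest neighbour. The function name, grid resolution, and nearest-neighbour fill are illustrative simplifications, not the paper's learned interpolation:

```python
import numpy as np

def sparse_reflectance_map(normals, colors, res=32):
    """Scatter per-pixel colours into an orientation-indexed map.

    normals: (N, 3) unit surface normals (camera-facing, n_z >= 0).
    colors:  (N, 3) observed RGB values.
    Returns a (res, res, 3) map indexed by the normal's x/y components,
    with unobserved cells filled from the nearest observed cell (a toy
    stand-in for the paper's learned sparse-data interpolation).
    """
    grid = np.zeros((res, res, 3))
    count = np.zeros((res, res))
    # Map n_x, n_y in [-1, 1] to integer grid indices.
    ij = np.clip(((normals[:, :2] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    for (i, j), c in zip(ij, colors):
        grid[i, j] += c
        count[i, j] += 1
    observed = count > 0
    grid[observed] /= count[observed][:, None]
    # Nearest-neighbour fill for cells with no observation.
    obs_idx = np.argwhere(observed)
    for i, j in np.argwhere(~observed):
        k = np.argmin(((obs_idx - (i, j)) ** 2).sum(1))
        grid[i, j] = grid[tuple(obs_idx[k])]
    return grid
```

    In the paper this fill is performed by a learned network; the sketch only shows the data layout of a reflectance map indexed by normal direction.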

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region, obtained from off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
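    The final step, applying a predicted displacement map to the smooth body model, can be sketched as a per-vertex offset along the normal. The function name, the use of a scalar displacement instead of the paper's full vector displacement and normal maps, and the nearest-neighbour UV sampling are all simplifying assumptions:

```python
import numpy as np

def apply_displacement(verts, normals, uvs, disp_map):
    """Displace smooth-body vertices along their normals using a
    displacement map sampled at each vertex's UV coordinate.

    verts, normals: (N, 3); uvs: (N, 2) in [0, 1];
    disp_map: (H, W) scalar displacement (illustrative simplification:
    Tex2Shape predicts vector displacement and normal maps).
    """
    h, w = disp_map.shape
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px]                 # nearest-neighbour texture sample
    return verts + normals * d[:, None]  # offset each vertex along its normal
```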

    Articulation-aware Canonical Surface Mapping

    We tackle the tasks of: 1) predicting a Canonical Surface Mapping (CSM) that indicates the mapping from 2D pixels to corresponding points on a canonical template shape, and 2) inferring the articulation and pose of the template corresponding to the input image. While previous approaches rely on keypoint supervision for learning, we present an approach that can learn without such annotations. Our key insight is that these tasks are geometrically related, and we can obtain a supervisory signal by enforcing consistency among the predictions. We present results across a diverse set of animal object categories, showing that our method can learn articulation and CSM prediction from image collections using only foreground mask labels for training. We empirically show that allowing articulation helps learn more accurate CSM prediction, and that enforcing consistency with the predicted CSM is similarly critical for learning meaningful articulation.
    Comment: to appear at CVPR 2020; project page: https://nileshkulkarni.github.io/acsm
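    The geometric-consistency signal can be sketched as a reprojection error: the template point predicted for a pixel, once posed and projected, should land back on that pixel. The following is a minimal numpy sketch under that reading, with illustrative names and shapes rather than the paper's API:

```python
import numpy as np

def csm_consistency_loss(pixels, csm_points, pose_R, pose_t, K):
    """Consistency loss used as supervision in place of keypoints.

    pixels:     (N, 2) image coordinates of sampled foreground pixels.
    csm_points: (N, 3) predicted template-space points for those pixels.
    pose_R, pose_t: rigid pose of the (articulated) template; K: camera
    intrinsics. All names/shapes are illustrative assumptions.
    """
    cam = csm_points @ pose_R.T + pose_t   # pose template points into camera frame
    proj = cam @ K.T                       # project onto the image plane
    proj = proj[:, :2] / proj[:, 2:3]      # perspective divide
    # Mean squared reprojection error back to the originating pixels.
    return float(np.mean(np.sum((proj - pixels) ** 2, axis=1)))
```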

    Understanding and optimising the packing density of perylene bisimide layers on CVD-grown graphene

    The non-covalent functionalisation of graphene is an attractive strategy to alter the surface chemistry of graphene without damaging its superior electrical and mechanical properties. Using the facile method of aqueous-phase functionalisation on large-scale CVD-grown graphene, we investigated the formation of different packing densities in self-assembled monolayers (SAMs) of perylene bisimide derivatives and related this to the amount of substrate contamination. We were able to directly observe wet-chemically deposited SAMs by scanning tunnelling microscopy (STM) on transferred CVD graphene and revealed that the densely packed perylene ad-layers adsorb with the conjugated π-system of the core perpendicular to the graphene substrate. This elucidation of the non-covalent functionalisation of graphene has major implications for controlling its surface chemistry and opens new pathways for adaptable functionalisation under ambient conditions and on a large scale.
    Comment: 27 pages (including SI), 10 figures

    Thermographic Particle Velocimetry (TPV) for Simultaneous Interfacial Temperature and Velocity Measurements

    We present an experimental technique, which we refer to as ‘thermographic particle velocimetry’ (TPV), that is capable of the simultaneous measurement of two-dimensional (2-D) surface temperature and velocity at the interface of multiphase flows. The development of the technique was motivated by the need to study gravity-driven liquid-film flows over inclined heated substrates; however, the same measurement principle can be applied to recover 2-D temperature- and velocity-field information at the interface of any flow with a sufficient density gradient between two fluid phases. The proposed technique relies on a single infrared (IR) imager and is based on the employment of highly reflective (here, silver-coated) particles which, when suspended near or at the interface, can be distinguished from the surrounding fluid domain due to their different emissivity. Image-processing steps used to recover the temperature and velocity distributions include the decomposition of each original raw IR image into separate thermal and particle images, the application of perspective-distortion corrections and spatial calibration, and finally the implementation of standard particle velocimetry algorithms. This procedure is demonstrated by applying the technique to a heated and stirred flow in an open container. In addition, two validation experiments are presented, one dedicated to the measurement of interfacial temperature and one to the measurement of interfacial velocity. The deviations between the results generated by TPV and those from accompanying conventional techniques do not exceed the errors associated with the latter.
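    A minimal sketch of the decomposition and velocimetry stages, assuming the reflective particles appear as apparently cold (low-emissivity) pixels and approximating the standard PIV stage by a single FFT cross-correlation. The threshold value and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def decompose_ir_frame(raw, particle_thresh):
    """Split a raw IR frame into a particle mask and a thermal image.
    Assumption: reflective particles read as low apparent temperature,
    so a simple threshold separates them from the fluid.
    """
    particles = raw < particle_thresh
    thermal = raw.astype(float).copy()
    thermal[particles] = np.nan          # mask particles out of the T-field
    return particles, thermal

def estimate_shift(mask_a, mask_b):
    """Integer-pixel displacement of mask_a relative to mask_b via FFT
    cross-correlation: a bare-bones stand-in for windowed PIV."""
    corr = np.fft.ifft2(np.fft.fft2(mask_a) * np.conj(np.fft.fft2(mask_b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = mask_a.shape
    # Wrap the circular shifts into a signed range.
    return (dy if dy <= h // 2 else dy - h, dx if dx <= w // 2 else dx - w)
```

    The masked thermal field would then be inpainted to give the temperature map, and the particle masks fed to a proper PIV algorithm for a full velocity field.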
