
    CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

    Most of the traditional work on intrinsic image decomposition relies on deriving priors about scene characteristics. On the other hand, recent research uses deep learning models as in-and-out black boxes and does not consider the well-established, traditional image formation process as the basis of the intrinsic learning process. As a consequence, although current deep learning approaches show superior performance on quantitative benchmarks, traditional approaches are still dominant in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) follows the traditional approach of obtaining intrinsic images from reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object-centered large-scale dataset with intrinsic ground-truth images is created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function augments color reproduction and the use of gradient information produces sharper edges. Datasets, models and higher resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet. Comment: CVPR 201
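    As a rough illustration of the physics-based steering described above, the sketch below (not the authors' implementation; all tensor names and loss weights are assumptions) combines an image-formation term, asking predicted reflectance and shading to reconstruct the input image, with a gradient term that supervises the edges of the intrinsic components, written here in PyTorch:

        import torch

        def spatial_gradients(x):
            """Finite-difference gradients of a (B, C, H, W) tensor along x and y."""
            gx = x[:, :, :, 1:] - x[:, :, :, :-1]
            gy = x[:, :, 1:, :] - x[:, :, :-1, :]
            return gx, gy

        def intrinsic_losses(image, pred_reflectance, pred_shading,
                             gt_reflectance, gt_shading,
                             w_reconstruction=1.0, w_gradient=0.5):
            """Physics-based reconstruction term plus gradient supervision."""
            # Image formation: the input should be explained by reflectance * shading.
            reconstruction = torch.mean((image - pred_reflectance * pred_shading) ** 2)

            # Gradient terms: match edges of the predicted intrinsics to ground truth.
            grad_loss = 0.0
            for pred, gt in ((pred_reflectance, gt_reflectance),
                             (pred_shading, gt_shading)):
                for pg, gg in zip(spatial_gradients(pred), spatial_gradients(gt)):
                    grad_loss = grad_loss + torch.mean(torch.abs(pg - gg))

            return w_reconstruction * reconstruction + w_gradient * grad_loss

        # Toy usage with random tensors standing in for network outputs and labels.
        if __name__ == "__main__":
            B, C, H, W = 2, 3, 64, 64
            image = torch.rand(B, C, H, W)
            loss = intrinsic_losses(image,
                                    torch.rand(B, C, H, W), torch.rand(B, 1, H, W),
                                    torch.rand(B, C, H, W), torch.rand(B, 1, H, W))
            print(loss.item())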

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed. It requires no training or prior on the reflectance, yet this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally
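    For context, the multi-shot idea builds on classical photometric stereo; the sketch below (a simplification under a Lambertian, calibrated-lighting assumption, not the paper's uncalibrated variational formulation) recovers per-pixel albedo and normals by least squares from several images taken under different known light directions. In a depth super-resolution setting, a normal field of this kind is what would then constrain the refinement of the upsampled depth map.

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """images: (m, H, W) grayscale stack; light_dirs: (m, 3) unit vectors."""
            m, H, W = images.shape
            I = images.reshape(m, -1)                           # (m, H*W)
            # Lambertian model: I = L @ G with G = albedo * normal at every pixel.
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)              # unit normals
            return normals.reshape(3, H, W), albedo.reshape(H, W)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            lights = rng.normal(size=(4, 3))
            lights /= np.linalg.norm(lights, axis=1, keepdims=True)
            stack = rng.random((4, 32, 32))
            normals, albedo = photometric_stereo(stack, lights)
            print(normals.shape, albedo.shape)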

    Recovering facial shape using a statistical model of surface normal direction

    In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix of the projected point positions. The eigenvectors of the covariance matrix define the modes of shape variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and a local irradiance constraint yields an efficient and accurate approach to facial shape recovery that is capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth as well as on real-world images.
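    A minimal sketch of the modelling step described in this abstract, under simplifying assumptions (all variable names are illustrative): each training face is a field of unit normals, every normal is mapped by the azimuthal equidistant projection onto the tangent plane at the pole (0, 0, 1), and the eigenvectors of the covariance of the stacked projections give the modes of variation.

        import numpy as np

        def azimuthal_equidistant(normals):
            """Project unit normals (N, 3) about the pole (0, 0, 1) to 2D points."""
            nz = np.clip(normals[:, 2], -1.0, 1.0)
            theta = np.arccos(nz)                           # angular distance from pole
            phi = np.arctan2(normals[:, 1], normals[:, 0])  # azimuth
            return np.stack([theta * np.cos(phi), theta * np.sin(phi)], axis=1)

        def normal_shape_model(training_normal_fields, n_modes=5):
            """training_normal_fields: (num_faces, num_pixels, 3) unit normals."""
            projected = np.stack([azimuthal_equidistant(f).ravel()
                                  for f in training_normal_fields])  # (faces, 2 * pixels)
            mean = projected.mean(axis=0)
            centred = projected - mean
            cov = centred.T @ centred / (len(projected) - 1)
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1][:n_modes]      # leading modes of variation
            return mean, eigvecs[:, order], eigvals[order]

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            fields = rng.normal(size=(10, 200, 3)) + np.array([0.0, 0.0, 4.0])
            fields /= np.linalg.norm(fields, axis=2, keepdims=True)
            mean, modes, variances = normal_shape_model(fields)
            print(modes.shape, variances.shape)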

    Is countershading camouflage robust to lighting change due to weather?

    Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by the fact that a particular pattern delivers ‘optimal’ camouflage only under very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a ‘generic’ predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target ‘prey’. We set these items in two light environments: strongly directional ‘sunny’ and more diffuse ‘cloudy’. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and the scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in the light environment, such as those caused by differences in weather, do change the effectiveness of countershading camouflage.
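    To make the lighting dependence concrete, the sketch below (an illustration under a simple Lambertian model, not the study's stimulus-generation code; the sun direction and ambient fraction are assumptions) computes the reflectance pattern that exactly cancels shading for one particular light field, which is why a pattern tuned for sunny conditions is no longer ‘optimal’ under diffuse, cloudy light.

        import numpy as np

        def countershading_reflectance(surface_normals, sun_direction, ambient=0.2):
            """Reflectance per point that flattens Lambertian shading for one lighting."""
            sun = sun_direction / np.linalg.norm(sun_direction)
            direct = np.clip(surface_normals @ sun, 0.0, None)  # cosine of incidence angle
            shading = ambient + (1.0 - ambient) * direct        # direct sun plus diffuse sky light
            reflectance = 1.0 / shading                         # cancel the shading gradient
            return reflectance / reflectance.max()              # normalise to [0, 1]

        if __name__ == "__main__":
            # Normals along a vertical cross-section of an ellipsoid, from back (up) to belly (down).
            angles = np.linspace(0.0, np.pi, 9)
            normals = np.stack([np.zeros_like(angles), np.cos(angles), np.sin(angles)], axis=1)
            sun_high = np.array([0.0, 1.0, 0.3])
            print(np.round(countershading_reflectance(normals, sun_high), 2))               # 'sunny'
            print(np.round(countershading_reflectance(normals, sun_high, ambient=0.8), 2))  # 'cloudy'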

    Establishing the behavioural limits for countershaded camouflage

    Countershading is a ubiquitous patterning of animals whereby the side that typically faces the highest illumination is darker. When tuned to specific lighting conditions and body orientation with respect to the light field, countershading minimizes the gradient of light the body reflects by counterbalancing shadowing due to illumination, and has therefore classically been thought of as an adaptation for visual camouflage. However, whether and how crypsis degrades when body orientation with respect to the light field is non-optimal has never been studied. We tested the behavioural limits on body orientation for countershading to deliver effective visual camouflage. We asked human participants to detect a countershaded target in a simulated three-dimensional environment. The target was optimally coloured for crypsis in a reference orientation and was displayed at different orientations. Search performance dramatically improved for deviations beyond 15 degrees. Detection time was significantly shorter and accuracy significantly higher than when the target orientation matched the countershading pattern. This work demonstrates the importance of maintaining body orientation appropriate for the displayed camouflage pattern, suggesting a possible selective pressure for animals to orient themselves appropriately to enhance crypsis

    Effects of Highlights on Gloss Perception

    The perception of a glossy surface in a static monochromatic image can occur when a bright highlight is embedded in a compatible context of shading and a bounding contour. Some images naturally give rise to the impression that a surface has a uniform reflectance, characteristic of a shiny object, even though the highlight may only cover a small portion of the surface. Nonetheless, an observer may adopt an attitude of scrutiny in viewing a glossy surface, whereby the impression of gloss is partial and nonuniform at image regions outside of a highlight. Using a rating scale and small probe points to indicate image locations, differential perception of gloss within a single object is investigated in the present study. Observers' gloss ratings are not uniform across the surface, but decrease as a function of distance from the highlight. When, by design, the distance from a highlight is uncoupled from the luminance value at corresponding probe points, the decrease in rated gloss correlates more with the distance than with the luminance change. Experiments also indicate that gloss ratings change as a function of estimated surface distance, rather than as a function of image distance. Surface continuity affects gloss ratings, suggesting that apprehension of 3D surface structure is crucial for gloss perception. Funding: Air Force Office of Scientific Research (F49620-98-1-0108); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IIS-97-20333); Office of Naval Research (N00014-95-1-0657, N00014-01-1-0624); Whitaker Foundation (RG-99-0186)