10,773 research outputs found

    Interactive removal and ground truth for difficult shadow scenes

    A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs marking pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multiscene-category ground truth for shadow removal algorithms. This data set of 186 images eliminates inconsistencies between shadow and shadow-free images and covers a range of shadow types such as soft, textured, colored, and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
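    The user-guided, on-the-fly detection step described above can be pictured as a tiny classifier trained at interaction time on the two rough strokes and then applied to every pixel. The sketch below is an illustrative approximation in Python, not the authors' implementation; the KNN model, color-only features, and stroke masks are assumptions made here for clarity.

```python
# Minimal sketch (assumed, not the paper's algorithm) of stroke-guided,
# on-the-fly shadow detection: train a small classifier on pixels sampled
# from two rough user scribbles (shadow vs. lit) and label all other pixels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def detect_shadow(image, shadow_stroke, lit_stroke, k=15):
    """image: HxWx3 float array in [0, 1];
    shadow_stroke / lit_stroke: HxW boolean masks from rough user scribbles."""
    pixels = image.reshape(-1, 3)

    # Training samples come only from the user's two scribbles.
    X = np.vstack([pixels[shadow_stroke.ravel()], pixels[lit_stroke.ravel()]])
    y = np.concatenate([
        np.ones(shadow_stroke.sum()),   # 1 = shadow
        np.zeros(lit_stroke.sum()),     # 0 = lit
    ])

    # "On-the-fly" learning: a small model fitted per image at interaction time.
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)

    # Classify every pixel to obtain an initial shadow mask.
    return clf.predict(pixels).reshape(image.shape[:2]).astype(bool)

if __name__ == "__main__":
    # Tiny synthetic example: the darker left half acts as the "shadow".
    img = np.ones((64, 64, 3)) * 0.8
    img[:, :32] *= 0.4
    shadow_stroke = np.zeros((64, 64), bool); shadow_stroke[30:34, 5:15] = True
    lit_stroke = np.zeros((64, 64), bool);    lit_stroke[30:34, 45:55] = True
    print("shadow pixels:", int(detect_shadow(img, shadow_stroke, lit_stroke).sum()))
```

    In the actual method the detection is followed by the penumbra registration and illumination estimation summarized in the abstract; the classifier above only stands in for the interactive detection stage.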

    OutCast: Outdoor Single-image Relighting with Cast Shadows

    We propose a relighting method for outdoor images. Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image, while also accounting for shading and global effects such as the sun light color and clouds. Previous solutions for this problem rely on reconstructing occluder geometry, e.g. using multi-view stereo, which requires many images of the scene. Instead, in this work we make use of a noisy, off-the-shelf single-image depth map estimate as a source of geometry. Whilst this can be a good guide for some lighting effects, the resulting depth map quality is insufficient for directly ray-tracing the shadows. Addressing this, we propose a learned image-space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input. For supplementary material, visit our project page at: https://dgriffiths.uk/outcast
    Comment: Eurographics 2022 - Accepted
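    To make the depth-as-geometry idea concrete, the sketch below shows the classical, non-learned alternative that the paper's learned ray-marching layer is designed to improve upon: treat the monocular depth map as a height field and march each pixel's ray toward the light, marking the pixel shadowed if the height field rises above the ray. The light direction, step count, and height-field interpretation are assumptions made here for illustration, not the paper's learned traversal.

```python
# Assumed, non-learned baseline: image-space ray marching over a height field
# derived from a (noisy) single-image depth map to compute a hard shadow mask.
import numpy as np

def cast_shadows(height, light_dir, n_steps=64):
    """height: HxW height field (e.g. inverse depth); light_dir: (dx, dy, dz)
    pointing toward the light, in (col, row, up) units per marching step."""
    H, W = height.shape
    dx, dy, dz = light_dir
    rows, cols = np.mgrid[0:H, 0:W]
    shadow = np.zeros((H, W), bool)

    # Each ray starts at its pixel's surface height and climbs toward the light.
    r, c = rows.astype(float), cols.astype(float)
    ray_h = height.astype(float)
    for _ in range(n_steps):
        r += dy; c += dx; ray_h = ray_h + dz
        inside = (r >= 0) & (r < H) & (c >= 0) & (c < W)
        ri = np.clip(np.round(r).astype(int), 0, H - 1)
        ci = np.clip(np.round(c).astype(int), 0, W - 1)
        # Occluded if the terrain at the ray's current position is above the ray.
        shadow |= inside & (height[ri, ci] > ray_h)
    return shadow

if __name__ == "__main__":
    # Toy scene: flat ground with a raised block; light comes from the left.
    h = np.zeros((128, 128)); h[40:60, 40:60] = 10.0
    mask = cast_shadows(h, light_dir=(-1.0, 0.0, 0.3))
    print("shadowed pixels:", int(mask.sum()))
```

    With a noisy monocular depth map, this direct comparison against the height field produces unreliable shadow edges, which is the failure mode the paper's learned image-space traversal and deep 3D representation are meant to address.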