3 research outputs found

    Static scene illumination estimation from video with applications

    We present a system that automatically recovers scene geometry and illumination from a video, providing a basis for various applications. Previous image-based illumination estimation methods require either user interaction or external information in the form of a database. We adopt structure-from-motion and multi-view stereo for initial scene reconstruction, and then estimate an environment map represented by spherical harmonics (which perform better than other bases). We also demonstrate several video editing applications that exploit the recovered geometry and illumination, including object insertion (e.g., for augmented reality), shadow detection, and video relighting.
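    As a concrete illustration of the spherical-harmonics environment-map representation mentioned above, the sketch below evaluates the real SH basis up to band 2 (9 coefficients, the form commonly used for low-frequency lighting) and reconstructs radiance in a given direction. This is a generic SH evaluation, not the paper's actual pipeline; the function names are illustrative.

    ```python
    import math

    def sh_basis(d):
        """Real spherical-harmonics basis (bands 0-2, 9 terms)
        evaluated at a unit direction d = (x, y, z)."""
        x, y, z = d
        return [
            0.282095,                    # Y_0^0
            0.488603 * y,                # Y_1^-1
            0.488603 * z,                # Y_1^0
            0.488603 * x,                # Y_1^1
            1.092548 * x * y,            # Y_2^-2
            1.092548 * y * z,            # Y_2^-1
            0.315392 * (3 * z * z - 1),  # Y_2^0
            1.092548 * x * z,            # Y_2^1
            0.546274 * (x * x - y * y),  # Y_2^2
        ]

    def radiance(coeffs, d):
        """Reconstruct environment radiance in direction d from 9 SH coefficients."""
        return sum(c * b for c, b in zip(coeffs, sh_basis(d)))

    # A constant (band-0 only) environment: the same value in every direction.
    ambient = [1.0] + [0.0] * 8
    print(radiance(ambient, (0.0, 0.0, 1.0)))
    ```

    An environment map estimated this way stores only 9 numbers per color channel, which is why SH is attractive for relighting and object-insertion applications like those listed above.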

    ShadowGAN: Shadow synthesis for virtual objects with conditional adversarial networks

    We introduce ShadowGAN, a generative adversarial network (GAN) for synthesizing shadows for virtual objects inserted into images. Given a target image containing several existing objects with shadows, and an input source object with a specified insertion position, the network generates a realistic shadow for the source object. The shadow is synthesized by a generator; using the proposed local and global adversarial discriminators, the synthetic shadow's appearance is locally realistic in shape and globally consistent with other objects' shadows in terms of shadow direction and area. To overcome the lack of training data, we produced training samples from public 3D models using rendering. Experimental results from a user study show that the synthetic shadowed results look natural and authentic.
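    The two-discriminator objective described above can be sketched as a scalar combination of adversarial terms: the generator is penalized when either the local (shape) or the global (direction/area consistency) discriminator rejects its shadow. This is a minimal illustration of the loss structure, not ShadowGAN's actual implementation; the weights `lam_local` and `lam_global` are hypothetical.

    ```python
    import math

    def bce(pred, target):
        """Binary cross-entropy on a single predicted probability."""
        eps = 1e-7
        pred = min(max(pred, eps), 1.0 - eps)
        return -(target * math.log(pred) + (1.0 - target) * math.log(1.0 - pred))

    def generator_loss(d_local_fake, d_global_fake, lam_local=1.0, lam_global=1.0):
        """Generator objective in a local+global adversarial setup:
        push both discriminators to classify the synthetic shadow as real (label 1)."""
        return (lam_local * bce(d_local_fake, 1.0)
                + lam_global * bce(d_global_fake, 1.0))

    # An undecided discriminator pair (p = 0.5 for both) gives 2 * ln(2).
    print(generator_loss(0.5, 0.5))
    ```

    The point of the split is that a single image-level discriminator tends to judge only overall realism, while the local term focuses on the shadow patch itself and the global term on its agreement with the scene's existing shadows.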