
    Learning Lightprobes for Mixed Reality Illumination

    Get PDF
    This paper presents the first photometric registration pipeline for Mixed Reality based on high-quality illumination estimation by convolutional neural network (CNN) methods. For easy adaptation and deployment of the system, we train the CNNs using purely synthetic images and apply them to real image data. To keep the pipeline accurate and efficient, we propose to fuse the light-estimation results from multiple CNN instances, and we show an approach for caching estimates over time. For optimal performance, we furthermore explore multiple strategies for CNN training. Experimental results show that the proposed method yields highly accurate estimates for photo-realistic augmentations.
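
    The paper does not publish its fusion rule here, but a minimal sketch of the two ideas — confidence-weighted fusion across CNN instances and temporal caching of estimates — might look as follows, assuming each instance emits an environment-map estimate plus a scalar confidence (the function names and the exponential-moving-average cache are illustrative assumptions, not the authors' code):

    ```python
    import numpy as np

    def fuse_light_estimates(estimates, confidences):
        """Confidence-weighted average of per-CNN lighting estimates.

        estimates:   list of (H, W, 3) environment-map arrays, one per CNN instance
        confidences: list of scalar weights in [0, 1]
        """
        est = np.stack(estimates)                 # (N, H, W, 3)
        w = np.asarray(confidences, dtype=np.float64)
        w = w / w.sum()                           # normalize weights
        return np.tensordot(w, est, axes=1)       # weighted sum over the N instances

    def update_cache(cached, fused, alpha=0.2):
        """Temporal caching via exponential moving average (a hypothetical choice)."""
        return fused if cached is None else (1 - alpha) * cached + alpha * fused
    ```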

    Accidental Light Probes

    Full text link
    Recovering lighting in a scene from a single image is a fundamental problem in computer vision. While a mirror ball light probe can capture omnidirectional lighting, light probes are generally unavailable in everyday images. In this work, we study recovering lighting from accidental light probes (ALPs) -- common, shiny objects like Coke cans, which often accidentally appear in daily scenes. We propose a physically-based approach to model ALPs and estimate lighting from their appearances in single images. The main idea is to model the appearance of ALPs with physically principled shading and to invert this process via differentiable rendering to recover the incident illumination. We demonstrate that we can put an ALP into a scene to enable high-fidelity lighting estimation. Our model can also recover lighting for existing images that happen to contain an ALP.
    Comment: CVPR 2023. Project website: https://kovenyu.com/ALP
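
    As a rough illustration of inverting a shading model to recover lighting: for a purely diffuse probe with known geometry and albedo, pixel intensity is approximately linear in low-order spherical-harmonic lighting coefficients, so the inversion reduces to least squares. This is a deliberately simplified stand-in for the paper's differentiable rendering of shiny ALPs:

    ```python
    import numpy as np

    def sh_basis(n):
        """First 4 real spherical-harmonic basis functions at unit normals n: (M, 3) -> (M, 4)."""
        x, y, z = n[:, 0], n[:, 1], n[:, 2]
        c0 = 0.282095 * np.ones_like(x)  # Y_00 constant term
        return np.stack([c0, 0.488603 * y, 0.488603 * z, 0.488603 * x], axis=1)

    def recover_lighting(intensity, normals, albedo):
        """Least-squares inverse of a simplified linear shading model:
        intensity ~ albedo * (B @ L), solved for the 4 SH lighting coefficients L."""
        B = albedo[:, None] * sh_basis(normals)          # (M, 4) design matrix
        L, *_ = np.linalg.lstsq(B, intensity, rcond=None)
        return L
    ```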

    EverLight: Indoor-Outdoor Editable HDR Lighting Estimation

    Full text link
    Because of the diversity in lighting environments, existing illumination estimation techniques have been designed explicitly for indoor or outdoor environments. Methods have focused either on capturing accurate energy (e.g., through parametric lighting models), which emphasizes shading and strong cast shadows, or on producing plausible texture (e.g., with GANs), which prioritizes plausible reflections. Approaches that provide editable lighting capabilities have been proposed, but these tend to rely on simplified lighting models, offering limited realism. In this work, we bridge the gap between these recent trends in the literature with a method that combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines. We leverage recent advances in GAN-based LDR panorama extrapolation from a regular image, which we extend to HDR using parametric spherical Gaussians. To achieve this, we introduce a novel lighting co-modulation method that injects lighting-related features throughout the generator, tightly coupling the original or edited scene illumination with the panorama generation process. In our representation, users can easily edit the light direction, intensity, number of lights, etc. to affect shading, while the rich, complex reflections blend seamlessly with the edits. Furthermore, our method covers both indoor and outdoor environments, demonstrating state-of-the-art results even when compared to domain-specific methods.
    Comment: 11 pages, 7 figures
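
    The spherical Gaussian parameterization mentioned above has a standard closed form, G(v) = a·exp(λ(μ·v − 1)). A sketch of how SG light lobes could top up an LDR panorama into HDR (the combination rule here is an assumption for illustration):

    ```python
    import numpy as np

    def sg_radiance(dirs, amplitude, axis, sharpness):
        """Evaluate one spherical Gaussian lobe a*exp(lambda*(mu.v - 1)).

        dirs: (..., 3) unit view directions; axis: (3,) unit lobe axis mu.
        """
        cos = np.tensordot(dirs, axis, axes=([-1], [0]))
        return amplitude * np.exp(sharpness * (cos - 1.0))[..., None]

    def ldr_to_hdr(ldr_pano, dirs, lobes):
        """Add parametric SG light lobes on top of an LDR panorama (toy combination)."""
        hdr = ldr_pano.astype(np.float64)
        for a, mu, lam in lobes:              # a: (3,) RGB amplitude, mu: (3,), lam: scalar
            hdr = hdr + a * sg_radiance(dirs, 1.0, mu, lam)
        return hdr
    ```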

    A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields

    Full text link
    We present the first real-time method for inserting a rigid virtual object into a neural radiance field, producing realistic lighting and shadowing effects while allowing interactive manipulation of the object. By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality. For lighting estimation, we produce accurate, robust, and 3D spatially-varying incident lighting that combines the near-field lighting from the NeRF with environment lighting to account for sources not covered by the NeRF. For occlusion, we blend the rendered virtual object with the background scene using an opacity map integrated from the NeRF. For shadows, using a precomputed spherical signed distance field, we query the visibility term for any point around the virtual object and cast soft, detailed shadows onto 3D surfaces. Compared with state-of-the-art techniques, our approach inserts virtual objects into scenes with superior fidelity and has great potential for application in augmented reality systems.
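
    A simplified sketch of the occlusion and shadow steps as described: alpha-composite the virtual render against the NeRF render using opacity integrated from the NeRF, and darken receiving surfaces by a queried visibility term. Function names and the scalar shadow model are illustrative assumptions:

    ```python
    import numpy as np

    def composite(virtual_rgb, virtual_alpha, nerf_rgb, nerf_occlusion):
        """Blend a rendered virtual object into a NeRF render.

        virtual_alpha:  object coverage per pixel, (H, W, 1)
        nerf_occlusion: opacity accumulated along each ray in front of the object,
                        integrated from the NeRF density field, (H, W, 1)
        """
        a = virtual_alpha * (1.0 - nerf_occlusion)   # object hidden where the scene occludes it
        return a * virtual_rgb + (1.0 - a) * nerf_rgb

    def apply_shadow(nerf_rgb, visibility, ambient=0.3):
        """Darken scene surfaces by a soft visibility term queried around the object."""
        return nerf_rgb * (ambient + (1.0 - ambient) * visibility)
    ```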

    Generating Light Estimation for Mixed-reality Devices through Collaborative Visual Sensing

    Get PDF
    Mixed reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM optionally operates across multiple mobile mixed-reality devices to leverage collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimation depending on the situational needs of the virtual scene and physical environment. To evaluate the runtime performance and perceptual efficacy of the system, GLEAM was implemented on the Unity 3D game engine and deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms, or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference for GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled-lighting user study with 18 participants revealed a general preference for policies that strike a balance between resolution and update rate.
    Master's thesis, Computer Science
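
    The policy trade-off described above — resolution versus update interval — can be pictured with a small hypothetical configuration chooser (the thresholds and field names below are assumptions, not GLEAM's actual implementation):

    ```python
    from dataclasses import dataclass

    @dataclass
    class ProbeConfig:
        face_resolution: int     # cubemap face size in pixels
        update_interval_ms: int

    def choose_policy(scene_is_dynamic: bool, reflective_materials: bool) -> ProbeConfig:
        """Hypothetical policy selection in the spirit of GLEAM's trade-offs."""
        if scene_is_dynamic and not reflective_materials:
            return ProbeConfig(face_resolution=16, update_interval_ms=15)    # favor update rate
        if reflective_materials and not scene_is_dynamic:
            return ProbeConfig(face_resolution=128, update_interval_ms=200)  # favor quality
        return ProbeConfig(face_resolution=64, update_interval_ms=60)        # balanced
    ```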

    Automatically augmented and annotated urban datasets using mixed reality

    Get PDF
    This project addresses the topic of dataset augmentation with a focus on process automation. In the field of autonomous driving systems, the datasets used by the learning algorithms are decisive, and the underlying machine learning systems always benefit from having more quality data to learn from. A recent work on image dataset augmentation has been published, but it did not address automation of the process and only involved adding cars to the existing images. Our project, in contrast, also supports other kinds of objects. Moreover, our work has centered on developing an automatic pipeline that enables continual augmentation of the dataset. Thanks to the effort invested in the analysis of the source images and the automated rendering of virtual objects, we can now produce augmented versions of the source images with relative ease.
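
    A toy sketch of one pipeline stage — compositing a rendered object into a source image and emitting the matching annotation automatically (the bounding-box format and label parameter are illustrative assumptions):

    ```python
    import numpy as np

    def paste_object(image, obj_rgba, x, y, label="car"):
        """Alpha-composite a rendered object crop into an image; return image + bbox.

        obj_rgba: (h, w, 4) rendered object with alpha channel, uint8
        (x, y):   top-left paste position, assumed fully inside the image
        """
        h, w = obj_rgba.shape[:2]
        region = image[y:y + h, x:x + w].astype(np.float64)
        alpha = obj_rgba[..., 3:4] / 255.0
        region = alpha * obj_rgba[..., :3] + (1 - alpha) * region
        out = image.copy()
        out[y:y + h, x:x + w] = region.astype(image.dtype)
        bbox = {"label": label, "x": x, "y": y, "width": w, "height": h}
        return out, bbox   # the bbox doubles as the automatic annotation
    ```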

    LIME: Live Intrinsic Material Estimation

    Get PDF
    We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that requires only a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading, and mirror images, which allow learning an effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel, highly efficient perceptual rendering loss that mimics real-world image formation and yields intermediate results even at run time. Estimating material parameters at real-time frame rates enables exciting mixed reality applications, such as seamless, illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation.
    Comment: 17 pages, Spotlight paper in CVPR 2018
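
    The perceptual rendering loss can be pictured as re-rendering the image from the estimated layers and comparing against the input. A toy, non-differentiable version under that assumption (the real loss is evaluated inside the network's training loop):

    ```python
    import numpy as np

    def rendering_loss(image, diffuse_albedo, specular_albedo,
                       diffuse_shading, specular_shading):
        """Toy rendering loss: re-render from estimated layers, compare to the input.

        All inputs are (H, W, 3) float arrays in [0, 1].
        """
        rendered = diffuse_albedo * diffuse_shading + specular_albedo * specular_shading
        return float(np.mean((rendered - image) ** 2))
    ```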

    Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses

    Get PDF
    Shadow retargeting maps the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow samples observed in the camera frame, this method efficiently recovers deformed shadow appearance. In this manuscript, we introduce a light-estimation approach that enables light-source detection using flat Fresnel lenses, allowing the method to work without a set of pre-established conditions. We extend the approach to handle scenarios with multiple receiver surfaces and a non-grounded occluder with high accuracy. Results are presented on a range of objects, deformations, and illumination conditions in real-time Augmented Reality (AR) on a mobile device. We demonstrate a practical application of the method in generating otherwise laborious in-betweening frames for 3D-printed stop-motion animation.
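
    One plausible reading of the Fresnel-lens idea: the lens focuses the dominant light source into a bright spot, whose offset from the lens center back-projects to a light direction. The following detector is a hypothetical sketch of that geometry, not the paper's implementation:

    ```python
    import numpy as np

    def detect_light_direction(gray, lens_center, focal_px):
        """Locate the focused bright spot behind a flat Fresnel lens and turn its
        offset from the lens center into an approximate incident light direction."""
        # 5x5 box blur to suppress single-pixel noise before peak picking
        k = np.ones((5, 5)) / 25.0
        H, W = gray.shape
        padded = np.pad(gray.astype(np.float64), 2, mode="edge")
        blurred = sum(padded[i:i + H, j:j + W] * k[i, j]
                      for i in range(5) for j in range(5))
        py, px = np.unravel_index(np.argmax(blurred), blurred.shape)
        cx, cy = lens_center
        d = np.array([px - cx, py - cy, focal_px])   # pinhole-style back-projection
        return d / np.linalg.norm(d)
    ```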