13 research outputs found

    Cuboid-maps for indoor illumination modeling and augmented reality rendering

    This thesis proposes a novel approach for indoor scene illumination modeling and augmented reality rendering. Our key observation is that an indoor scene is well represented by a set of rectangular spaces, where important illuminants reside on their boundary faces, such as a window on a wall or a ceiling light. Given a perspective image or a panorama and detected rectangular spaces as inputs, we estimate their cuboid shapes and infer illumination components for each face of the cuboids using a simple convolutional neural architecture. The process turns an image into a set of cuboid environment maps, each of which is a simple extension of a traditional cube-map. For augmented reality rendering, we simply take a linear combination of the inferred environment maps and the input image, producing surprisingly realistic illumination effects. This approach is simple and efficient, avoids flickering, and achieves quantitatively more accurate and qualitatively more realistic effects than competing, substantially more complicated systems.
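    The cuboid-map lookup and the linear blending step lend themselves to a compact illustration. Below is a minimal sketch, not the thesis code: a cuboid map is sampled like a cube-map except that the query point sits anywhere inside an axis-aligned box, so the lookup becomes a ray/box intersection; the AR step then linearly combines the inferred maps. The face layout, face names, and blending weights are assumptions.

```python
# Minimal sketch of sampling a "cuboid map": like a cube-map, but the six
# faces lie on an axis-aligned box that need not be a unit cube centered
# at the query point. Radiance at a point p inside the box from direction
# d is found by intersecting the ray (p, d) with the box.
import numpy as np

class CuboidMap:
    def __init__(self, box_min, box_max, faces):
        # faces: dict mapping face id ('+x','-x','+y','-y','+z','-z') to an
        # HxWx3 radiance image (assumed layout; the thesis's may differ).
        self.bmin = np.asarray(box_min, float)
        self.bmax = np.asarray(box_max, float)
        self.faces = faces

    def sample(self, p, d):
        """Radiance arriving at point p (inside the box) from direction d."""
        p, d = np.asarray(p, float), np.asarray(d, float)
        d = d / np.linalg.norm(d)
        # For each axis, distance along the ray to the box plane it exits.
        with np.errstate(divide="ignore", invalid="ignore"):
            t_exit = np.where(d > 0, (self.bmax - p) / d,
                              np.where(d < 0, (self.bmin - p) / d, np.inf))
        axis = int(np.argmin(t_exit))          # axis of the exit face
        hit = p + t_exit[axis] * d
        face = ("+" if d[axis] > 0 else "-") + "xyz"[axis]
        # Parameterize the two remaining axes into [0,1]^2 texture coords.
        uv_axes = [a for a in range(3) if a != axis]
        uv = (hit[uv_axes] - self.bmin[uv_axes]) / \
             (self.bmax[uv_axes] - self.bmin[uv_axes])
        img = self.faces[face]
        h, w = img.shape[:2]
        j = min(int(uv[0] * w), w - 1)
        i = min(int(uv[1] * h), h - 1)
        return img[i, j]

def blend(maps, weights, p, d):
    # AR rendering step from the abstract: a linear combination of the
    # inferred environment maps (the weights here are assumptions).
    return sum(w * m.sample(p, d) for m, w in zip(maps, weights))
```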

    Neural Illumination: Lighting Prediction for Indoor Environments

    This paper addresses the task of estimating the light arriving from all directions at a 3D point observed at a selected pixel in an RGB image. This task is challenging because it requires predicting a mapping from a partial scene observation by a camera to a complete illumination map for the selected position, which depends on the 3D location of the selection, the distribution of unobserved light sources, the occlusions caused by scene geometry, etc. Previous methods attempt to learn this complex mapping directly using a single black-box neural network, which often fails to estimate high-frequency lighting details for scenes with complicated 3D geometry. Instead, we propose "Neural Illumination", a new approach that decomposes illumination prediction into several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene completion, and 3) LDR-to-HDR estimation. The advantage of this approach is that the sub-tasks are relatively easy to learn and can be trained with direct supervision, while the whole pipeline is fully differentiable and can be fine-tuned with end-to-end supervision. Experiments show that our approach performs significantly better quantitatively and qualitatively than prior work.
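    The three-stage decomposition can be pictured as composed differentiable modules. Below is a minimal PyTorch sketch under stated assumptions, not the paper's architecture: each stage is a small stand-in network that could be trained with direct supervision, while the composition stays differentiable for end-to-end fine-tuning. The geometry-aware warping from the observed view to the query pixel's panorama is omitted for brevity.

```python
# Sketch of the decomposed pipeline: three supervisable sub-networks
# composed into one differentiable whole (stand-in models, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    # Placeholder for each sub-network; the real ones would be deep CNNs.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

geometry   = Stage(3, 1)   # 1) RGB -> per-pixel geometry (e.g. depth)
completion = Stage(4, 3)   # 2) partial observation -> full LDR illumination
ldr_to_hdr = Stage(3, 3)   # 3) LDR illumination -> HDR illumination

def pipeline(rgb):
    depth = geometry(rgb)
    ldr_env = completion(torch.cat([rgb, depth], dim=1))
    return ldr_to_hdr(ldr_env), depth, ldr_env

# Training mixes direct supervision on each sub-task with an end-to-end
# loss on the final HDR map (zero targets here stand in for ground truth).
rgb = torch.randn(2, 3, 64, 64)
hdr, depth, ldr = pipeline(rgb)
loss = (F.l1_loss(depth, torch.zeros_like(depth)) +
        F.l1_loss(ldr, torch.zeros_like(ldr)) +
        F.l1_loss(hdr, torch.zeros_like(hdr)))
loss.backward()
```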

    Ensuring visual coherence in augmented reality training systems with regard to aerospace specifics

    In May 2022, Saudi Arabian Military Industries, a Saudi government agency, acquired an augmented reality training platform for pilots. In September, the Boeing Corporation began development of an augmented reality pilot simulator, and in November a similar project was launched by BAE Systems, a leading British developer of aeronautical engineering. These facts allow us to speak confidently about the beginning of a new era of aviation simulators: simulators using augmented reality technology. One of the promising advantages of this technology is the ability to safely simulate dangerous situations in the real world. A necessary condition for exploiting this advantage is ensuring the visual coherence of augmented reality scenes: virtual objects must be indistinguishable from real ones. All the global IT leaders regard augmented reality as the next big wave of radical change in digital electronics, so visual coherence is becoming a key issue for the future of IT, and in aerospace applications visual coherence has already acquired practical significance. The Russian Federation lags far behind in studying the problems of visual coherence in general, and for augmented reality flight simulators in particular: at the time of publication, the authors managed to find only two papers on the subject in the Russian research space, while abroad their number is already around a thousand. The purpose of this review article is to create the conditions for remedying this problem. Visual coherence depends on many factors: lighting, color tone, shadows cast by virtual objects onto real ones, mutual reflections, textures of virtual surfaces, optical aberrations, convergence and accommodation, etc. The article reviews publications devoted to methods for estimating the illumination conditions and color tone of a real scene and transferring them to virtual objects, both with light probes and from individual images, as well as to rendering virtual objects in augmented reality scenes, including with neural networks.
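    As a concrete instance of the color-tone transfer methods such reviews cover, the sketch below shows Reinhard-style statistics matching, a standard technique from this literature rather than anything from the article itself: the virtual rendering's per-channel mean and standard deviation are matched to those of the real camera frame.

```python
# Reinhard-style color-tone transfer (simplified: the original operates in
# the decorrelated l-alpha-beta space; plain RGB is used here for brevity).
import numpy as np

def match_color_tone(virtual_rgb, real_rgb, eps=1e-6):
    """Both inputs are float arrays in [0, 1] with shape (H, W, 3).
    Returns the virtual rendering re-toned to match the real frame."""
    v_mean, v_std = virtual_rgb.mean((0, 1)), virtual_rgb.std((0, 1))
    r_mean, r_std = real_rgb.mean((0, 1)), real_rgb.std((0, 1))
    out = (virtual_rgb - v_mean) * (r_std / (v_std + eps)) + r_mean
    return np.clip(out, 0.0, 1.0)
```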

    Generating Light Estimation for Mixed-reality Devices through Collaborative Visual Sensing

    Mixed reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM can optionally operate across multiple mobile mixed-reality devices, leveraging collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimation depending on the situational needs of the virtual scene and physical environment. To evaluate the runtime performance and perceptual efficacy of the system, GLEAM was implemented on the Unity 3D Game Engine and deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference for GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled lighting user study on 18 participants revealed a general preference for policies that strike a balance between resolution and update rate.
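    The policy trade-off the abstract describes can be sketched as follows; apart from the 15 ms and 200 ms end points reported above, the structure, names, and numbers are illustrative assumptions, not GLEAM's actual API.

```python
# Sketch of a resolution vs. update-interval policy choice for continual
# light estimation on a mobile mixed-reality device.
from dataclasses import dataclass

@dataclass
class CaptureConfig:
    cubemap_res: int         # pixels per cube-map face edge
    update_interval_ms: int  # time budget between estimation updates

POLICIES = {
    # Fast updates for dynamic scenes, at low spatial resolution.
    "prioritize_update": CaptureConfig(cubemap_res=16, update_interval_ms=15),
    # High spatial quality for static scenes, updated rarely.
    "prioritize_quality": CaptureConfig(cubemap_res=128, update_interval_ms=200),
    # Middle ground, which the controlled study suggests users tend to prefer.
    "balanced": CaptureConfig(cubemap_res=64, update_interval_ms=60),
}

def pick_policy(scene_is_dynamic: bool, object_is_glossy: bool) -> CaptureConfig:
    if scene_is_dynamic:
        return POLICIES["prioritize_update"]
    if object_is_glossy:  # sharp reflections expose a low-resolution estimate
        return POLICIES["prioritize_quality"]
    return POLICIES["balanced"]
```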

    Estimating Reflectance Properties and Reilluminating Scenes Using Physically Based Rendering and Deep Neural Networks

    Estimating material properties and modeling the appearance of an object under varying illumination conditions is a complex process. In this thesis, we address the problem by proposing a novel framework to re-illuminate scenes by recovering their reflectance properties. Uniquely, following a divide-and-conquer approach, we recast the problem into its two constituent sub-problems. In the first sub-problem, we develop a synthetic dataset of spheres with realistic materials, covering a wide range of material properties and rendered from varying viewpoints under a fixed directional light. Images from the dataset are further processed into the reflectance maps used during training of the network. In the second sub-problem, reflectance maps are created for scenes by reorganizing the outgoing radiances recorded in multi-view images. The network trained on the synthetic dataset is used to infer the material properties of the reflectance maps acquired for the test scenes. These predictions are then reused to relight the scenes from novel viewpoints and under different lighting conditions using path tracing. A number of experiments are conducted, and performance is reported using different metrics to justify our design decisions and the choice of our network. We also show that, given multi-view images, the camera properties, and the geometry of a scene, our technique can predict the reflectance properties with our trained network within seconds. Finally, we present visual results of re-illumination on several scenes under different lighting conditions.
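    One way to picture the second sub-problem is the sketch below, an interpretation of the abstract rather than the thesis code: outgoing radiance observed in an image is re-binned by the surface normal at each pixel, yielding a reflectance map over the sphere of orientations, analogous to the rendered spheres used for training.

```python
# Build a reflectance map by reorganizing observed outgoing radiance by
# surface orientation (camera-facing hemisphere only).
import numpy as np

def reflectance_map(radiance, normals, res=64):
    """radiance: (H, W, 3) observed colors; normals: (H, W, 3) unit normals
    in camera space. Returns a (res, res, 3) map indexed by (nx, ny) for
    normals with nz > 0, plus a boolean coverage mask."""
    acc = np.zeros((res, res, 3))
    cnt = np.zeros((res, res, 1))
    front = normals[..., 2] > 0           # keep visible orientations only
    n = normals[front]
    c = radiance[front]
    # Map nx, ny in [-1, 1] onto pixel indices of the orientation grid.
    u = np.clip(((n[:, 0] + 1) / 2 * res).astype(int), 0, res - 1)
    v = np.clip(((n[:, 1] + 1) / 2 * res).astype(int), 0, res - 1)
    np.add.at(acc, (v, u), c)             # accumulate radiance per bin
    np.add.at(cnt, (v, u), 1.0)           # count samples per bin
    mask = cnt[..., 0] > 0
    acc[mask] /= cnt[mask]                # average where we have samples
    return acc, mask
```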

    Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality

    Realistically inserting a virtual object into the real-time physical environment is a desirable feature of augmented reality (AR) applications and mixed reality (MR) in general. This problem is a vital research area in computer graphics, a field undergoing constant discovery. The algorithms and methods for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are employed in many applications to achieve a realistic perception by humans. The continuous development of computer vision and machine learning techniques, combined with established computer graphics and image processing methods, has produced a significant range of novel AR/MR techniques. These include methods for light source acquisition through image-based lighting or sampling, for registering and estimating the lighting conditions, and for compositing global illumination. In this review, we discuss the pipeline stages in detail, elaborating on the methods and techniques that have contributed to photo-realistic rendering, visual coherence, and interactive real-time illumination in AR/MR.
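    As an illustration of the first pipeline stage, light source acquisition through image-based sampling, the sketch below extracts a few directional lights from a captured equirectangular environment map. It is a deliberate simplification of the importance-sampling methods such reviews cover (e.g. median-cut partitioning), not code from any surveyed system.

```python
# Extract the k brightest solid-angle-weighted texels of an equirectangular
# environment map as directional light sources.
import numpy as np

def directional_lights_from_envmap(env, k=4):
    """env: (H, W, 3) linear-radiance equirectangular map (y-up convention
    assumed). Returns k (direction, rgb_intensity) pairs."""
    h, w, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
    # Solid-angle weight: equirectangular rows shrink toward the poles.
    theta = (np.arange(h) + 0.5) / h * np.pi
    weighted = lum * np.sin(theta)[:, None]
    idx = np.argsort(weighted.ravel())[-k:]
    rows, cols = np.unravel_index(idx, (h, w))
    lights = []
    for r, c in zip(rows, cols):
        th = (r + 0.5) / h * np.pi            # polar angle from +y
        ph = (c + 0.5) / w * 2 * np.pi        # azimuth
        d = np.array([np.sin(th) * np.cos(ph),
                      np.cos(th),
                      np.sin(th) * np.sin(ph)])
        lights.append((d, env[r, c]))
    return lights
```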