
    Single-Image Depth Prediction Makes Feature Matching Easier

    Good local features improve the robustness of many 3D re-localization and multi-view reconstruction pipelines. The problem is that viewing angle and distance severely impact the recognizability of a local feature. Attempts to improve appearance invariance by choosing better local feature points or by leveraging outside information have come with prerequisites that made some of them impractical. In this paper, we propose a surprisingly effective enhancement to local feature extraction, which improves matching. We show that CNN-based depths inferred from single RGB images are quite helpful, despite their flaws. They allow us to pre-warp images and rectify perspective distortions, significantly enhancing SIFT and BRISK features and enabling more good matches, even when cameras are looking at the same scene from opposite directions.
    Comment: 14 pages, 7 figures, accepted for publication at the European Conference on Computer Vision (ECCV) 2020
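The pre-warping idea can be sketched with a rotational homography: given a surface normal estimated from the predicted depth, the image is warped as if the camera had rotated to face the plane head-on, and a standard detector such as SIFT is then run on the rectified image. A minimal numpy sketch under that assumption, with a pinhole intrinsic matrix `K` and a unit plane normal `n` in camera coordinates (the function names are illustrative, not the paper's):

```python
import numpy as np

def rotation_aligning(n, target=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix R such that R @ n == target (both unit vectors)."""
    n = n / np.linalg.norm(n)
    v = np.cross(n, target)
    c = float(np.dot(n, target))
    if np.isclose(c, -1.0):  # opposite vectors: a 180-degree flip
        return np.diag([1.0, -1.0, -1.0])
    # Rodrigues formula, using (1 - c) / |v|^2 == 1 / (1 + c)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def rectifying_homography(K, n):
    """Homography K R K^-1 simulating a camera rotation that makes the
    plane with normal n fronto-parallel."""
    R = rotation_aligning(n)
    return K @ R @ np.linalg.inv(K)
```

The resulting 3x3 matrix can then be applied with, e.g., OpenCV's `cv2.warpPerspective` before extracting features from the rectified image.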

    Ensuring Visual Coherence in Augmented Reality Training Systems with Aerospace Specifics in Mind

    In May 2022, Saudi Arabian Military Industries, a Saudi government agency, acquired an augmented reality training platform for pilots. In September, the Boeing Corporation began developing an augmented reality pilot simulator, and in November a similar project was launched by BAE Systems, a leading British developer of aeronautical engineering. These facts allow us to speak confidently about the beginning of a new era of aviation simulators: simulators using augmented reality technology. One of the promising advantages of this technology is the ability to safely simulate dangerous situations in the real world. A necessary condition for exploiting this advantage is ensuring the visual coherence of augmented reality scenes: virtual objects must be indistinguishable from real ones. All the global IT leaders regard augmented reality as the next big wave of radical change in digital electronics, so visual coherence is becoming a key issue for the future of IT, and in aerospace applications visual coherence has already acquired practical significance. The Russian Federation lags far behind in studying the problems of visual coherence in general, and for augmented reality flight simulators in particular: at the time of publication the authors managed to find only two papers on the subject in the Russian research space, while abroad their number is already approximately a thousand. The purpose of this review article is to create conditions for solving the problem. Visual coherence depends on many factors: lighting, color tone, shadows cast by virtual objects on real ones, mutual reflections, textures of virtual surfaces, optical aberrations, convergence and accommodation, etc.
The article reviews publications devoted to methods for estimating the illumination conditions and color tone of a real scene and transferring them to virtual objects, using various probes as well as individual images, and to rendering virtual objects in augmented reality scenes, including with the use of neural networks.

    Development of Immersive and Interactive Virtual Reality Environment for Two-Player Table Tennis

    Although the history of Virtual Reality (VR) is only about half a century old, technologies across the VR field are developing rapidly. VR is a computer-generated simulation that replaces or augments the real world through various media. In a VR environment, participants have a perception of “presence”, which can be described by the sense of immersion and intuitive interaction. One of the major VR applications is in the field of sports, in which a life-like sports environment is simulated, and the body actions of players can be tracked and represented using VR tracking and visualisation technology. In the entertainment field, exergaming, which merges video games with physical exercise by employing tracking or even 3D display technology, can be considered a small-scale VR. For the research presented in this thesis, a novel realistic real-time table tennis game combining immersive, interactive and competitive features is developed. The implemented system integrates the InterSense tracking system, a SwissRanger 3D camera and a three-wall rear-projection stereoscopic screen. The InterSense tracking system is based on ultrasonic and inertial sensing techniques, which provide fast and accurate 6-DOF (six degrees of freedom) tracking of four trackers. Two trackers are placed on the two players’ heads to provide the players’ viewing positions. The other two trackers are held by the players as racquets. The SwissRanger 3D camera is mounted on top of the screen to capture the player’

    EMLight: Lighting Estimation via Spherical Distribution Approximation

    Illumination estimation from a single image is critical in 3D rendering and has been investigated extensively in the computer vision and computer graphics research communities. However, existing works estimate illumination by either regressing light parameters or generating illumination maps, which are often hard to optimize or tend to produce inaccurate predictions. We propose Earth Mover Light (EMLight), an illumination estimation framework that leverages a regression network and a neural projector for accurate illumination estimation. We decompose the illumination map into a spherical light distribution, light intensity and an ambient term, and define illumination estimation as a parameter regression task for the three illumination components. Motivated by the Earth Mover's distance, we design a novel spherical mover's loss that guides the network to regress light distribution parameters accurately by taking advantage of the subtleties of spherical distributions. Under the guidance of the predicted spherical distribution, light intensity and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency. Extensive experiments show that EMLight achieves accurate illumination estimation, and the relighting it produces in 3D object embedding exhibits superior plausibility and fidelity compared with state-of-the-art methods.
    Comment: Accepted to AAAI 2021
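The core of a spherical mover's loss is an Earth Mover's (optimal transport) distance between discrete distributions of light energy over anchor points on the unit sphere. A toy numpy sketch of that idea, using great-circle distance as the ground cost and a Sinkhorn iteration as a cheap EMD approximation (an illustrative stand-in, not EMLight's actual loss):

```python
import numpy as np

def geodesic_cost(P, Q):
    """Pairwise great-circle distances between unit vectors P (N,3) and Q (M,3)."""
    d = np.clip(P @ Q.T, -1.0, 1.0)
    return np.arccos(d)

def sinkhorn_emd(a, b, C, eps=0.05, iters=200):
    """Entropy-regularised optimal-transport cost between histograms a and b
    with ground-cost matrix C (Sinkhorn approximation of EMD)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):      # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]  # transport plan
    return float(np.sum(T * C))
```

Two identical anchor sets give a near-zero cost, while rotating one set of anchors forces mass to travel along the sphere and the cost grows with the rotation angle, which is the geometric sensitivity such a loss exploits.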

    Object-based Illumination Estimation with Rendering-aware Neural Networks

    We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas. Conventional inverse rendering is too computationally demanding for real-time applications, and the performance of purely learning-based techniques may be limited by the meager input data available from individual objects. To address these issues, we propose an approach that takes advantage of physical principles from inverse rendering to constrain the solution, while utilizing neural networks to expedite the more computationally expensive portions of its processing, to increase robustness to noisy input data, and to improve temporal and spatial stability. The result is a rendering-aware system that estimates the local illumination distribution at an object with high accuracy and in real time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism.
    Comment: ECCV 2020
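As a point of reference for what "physical principles from inverse rendering" can constrain, a classical non-network baseline recovers low-frequency lighting from a Lambertian object by least squares over the 9-term spherical-harmonics basis of Ramamoorthi and Hanrahan. A self-contained numpy sketch of that baseline (an illustrative comparison point, not the paper's rendering-aware network):

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical-harmonic basis values for unit normals n (N,3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),                 # Y_00
        0.488603 * y, 0.488603 * z, 0.488603 * x,  # Y_1-1, Y_10, Y_11
        1.092548 * x * y, 1.092548 * y * z,        # Y_2-2, Y_2-1
        0.315392 * (3.0 * z * z - 1.0),            # Y_20
        1.092548 * x * z,                          # Y_21
        0.546274 * (x * x - y * y),                # Y_22
    ], axis=1)

def estimate_lighting(normals, intensities):
    """Least-squares 9-coefficient SH lighting from observed shading values."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs
```

Given noiseless shading on a well-distributed set of normals, the least-squares fit recovers the generating coefficients exactly; the meager-data problem the paper highlights appears when an object exposes only a narrow range of normals, making the system ill-conditioned.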