
    Planar PØP: feature-less pose estimation with applications in UAV localization

    We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinguishable features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and estimate the pose directly in the pose space of the camera. Compared with a general PnP method, this approach requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used: the pose is found by minimizing the area between the projected and the observed shape contours. To emphasize that no point correspondences between the projected template and the observed contour are used, we call the method Planar PØP. The method is demonstrated both in simulation and in a real UAV localization application, where comparisons with a precise ground truth are provided.
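    A minimal sketch of the contour-area objective behind this kind of approach, assuming a toy 2D rigid transform (rotation plus translation) in place of the paper's full camera-pose parametrization; the function names and the use of shapely/scipy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from shapely.geometry import Polygon

def transform(points, theta, tx, ty):
    """Apply a 2D rigid transform to an (N, 2) array of contour points."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

def area_mismatch(params, template, observed):
    """Area between the projected template and the observed contour:
    the symmetric difference vanishes exactly when the shapes coincide."""
    projected = Polygon(transform(template, *params))
    return projected.symmetric_difference(observed).area

# Toy data: a unit-square template observed under an unknown pose.
template = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
observed = Polygon(transform(template, 0.3, 0.5, -0.2))

# Derivative-free optimization, since the area objective is not smooth.
result = minimize(area_mismatch, x0=[0.0, 0.0, 0.0],
                  args=(template, observed), method="Nelder-Mead")
print(result.x)  # approximately (0.3, 0.5, -0.2)
```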

    Feeling crowded yet?: Crowd simulations for VR

    With advances in virtual reality technology and its multiple applications, the need for believable, immersive virtual environments is increasing. Even though current computer graphics methods allow us to develop highly realistic virtual worlds, the main element failing to enhance presence is autonomous groups of human inhabitants. A great number of crowd simulation techniques have emerged in the last decade, but critical details in the crowds' movements and appearance do not yet meet the standards necessary to convince VR participants that they are present in a real crowd. In this paper, we review recent advances in the creation of immersive virtual crowds and discuss areas that require further work to turn these simulations into fully immersive and believable experiences.

    An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality

    Although Augmented, Virtual, and Mixed Reality (AR/VR/MR) systems have been widely developed and many of these applications have accomplished significant results, rendering a virtual object under the appropriate illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit with a rendering process that has mostly been done offline. The physical scene contains illumination information that can be sampled and then used to render virtual objects in real time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems, which provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object's appearance in association with the real world and related criteria. The system achieves this through three simultaneous aspects. (1) The first is to estimate the incident light angle in the real environment using a live-feed 360° camera instrumented on an AR device. (2) The second is to simulate the reflected light using two routes: (a) global cube map construction and (b) local sampling. (3) The third is to define the shading properties of the virtual object to depict correct lighting assets and suitable shadow imitation. Finally, the performance efficiency of both routes of the system is examined to reduce the overall cost, and the results are evaluated through shadow observation and a user study.
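    A minimal sketch of aspect (1), assuming the incident-light direction is taken as the brightest region of the live equirectangular 360° frame; the function name and the brightest-pixel heuristic are illustrative assumptions, not the evaluated system:

```python
import numpy as np

def dominant_light_direction(equirect):
    """equirect: (H, W, 3) float image from the 360° camera.
    Returns a unit vector pointing toward the brightest region."""
    # Rec. 709 luminance weights for the RGB channels.
    luminance = equirect @ np.array([0.2126, 0.7152, 0.0722])
    v, u = np.unravel_index(np.argmax(luminance), luminance.shape)
    h, w = luminance.shape
    # Map pixel coordinates to spherical angles (equirectangular mapping).
    phi = (u / w) * 2.0 * np.pi - np.pi   # azimuth in [-pi, pi]
    theta = (v / h) * np.pi               # inclination in [0, pi], 0 = up
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])
```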

    Towards Predictive Rendering in Virtual Reality

    The strive for generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational efforts, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task.

    First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering.

    A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
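    As one concrete illustration of the data-reduction step, a minimal sketch of PCA-style BTF compression via truncated SVD; the matrix layout, names, and component count are assumptions for illustration rather than the thesis' actual compression scheme:

```python
import numpy as np

def compress_btf(btf, k=16):
    """btf: (n_view_light, n_texels) matrix of measured reflectance,
    one row per (view, light) direction pair. Returns two small factors
    whose product approximates the original data."""
    U, S, Vt = np.linalg.svd(btf, full_matrices=False)
    return U[:, :k] * S[:k], Vt[:k]   # (n_view_light, k), (k, n_texels)

def lookup(basis, weights, vl_index, texel_index):
    """Reconstruct one reflectance sample at render time: a k-term
    dot product instead of reading the raw, uncompressed data."""
    return basis[vl_index] @ weights[:, texel_index]
```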

    A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields

    We present the first real-time method for inserting a rigid virtual object into a neural radiance field (NeRF), which produces realistic lighting and shadowing effects and allows interactive manipulation of the object. By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality. For lighting estimation, we produce accurate, robust, 3D spatially varying incident lighting that combines the near-field lighting from the NeRF with environment lighting to account for sources not covered by the NeRF. For occlusion, we blend the rendered virtual object with the background scene using an opacity map integrated from the NeRF. For shadows, using a precomputed field of spherical signed distance functions, we query the visibility term for any point around the virtual object and cast soft, detailed shadows onto 3D surfaces. Compared with state-of-the-art techniques, our approach inserts virtual objects into scenes with superior fidelity and has great potential for application in augmented reality systems.
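    A minimal sketch of the occlusion step, assuming the NeRF opacity accumulated along each ray up to the object's depth is available per pixel; the names and inputs are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def composite(virtual_rgb, virtual_mask, background_rgb, nerf_opacity):
    """virtual_rgb / background_rgb: (H, W, 3) images.
    virtual_mask: (H, W) coverage of the rendered virtual object.
    nerf_opacity: (H, W) NeRF density integrated up to the object's
    depth, i.e., how much real scene content sits in front of it."""
    # Where the NeRF opacity is high, the real scene occludes the object.
    alpha = virtual_mask * (1.0 - nerf_opacity)
    return (alpha[..., None] * virtual_rgb
            + (1.0 - alpha[..., None]) * background_rgb)
```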

    Real-Time Estimation of Illumination Direction for Augmented Reality with Low-Cost Sensors

    In recent years, Augmented Reality has become a very popular topic, both as a research and as a commercial field. This trend originated with the use of mobile devices as computational core and display. The appearance of virtual objects and their interaction with the real world is a key element in the success of Augmented Reality software. A common issue in this type of software is the visual inconsistency between virtual and real objects due to incorrect illumination. Although illumination is a common research topic in Computer Graphics, few studies have addressed real-time estimation of the illumination direction. In this work we present a low-cost approach to detect the direction of the environment illumination, allowing virtual objects to be lit according to the real ambient light and improving the visual integration of the scene. Our solution is open source, based on Arduino hardware, and the presented system was developed on Android.
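    A minimal sketch of how such a low-cost sensor rig could vote for a light direction, assuming six photoresistors mounted on the faces of a cube with readings already normalized to [0, 1]; the layout and names are illustrative assumptions, not the paper's exact hardware:

```python
import numpy as np

# Unit normals of six photoresistors mounted on the faces of a cube.
SENSOR_NORMALS = np.array([[ 1, 0, 0], [-1, 0, 0],
                           [ 0, 1, 0], [ 0, -1, 0],
                           [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def estimate_light_direction(readings):
    """readings: six normalized intensity values in [0, 1].
    Each sensor votes with its facing normal, weighted by its reading;
    the normalized sum points toward the dominant light source."""
    direction = (np.asarray(readings)[:, None] * SENSOR_NORMALS).sum(axis=0)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

# Example: the +x and +y sensors read high -> light from the x/y diagonal.
print(estimate_light_direction([0.9, 0.1, 0.8, 0.1, 0.2, 0.1]))
```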
