16 research outputs found

    Outdoor Augmented Reality: State of the Art and Issues

    The goal of an outdoor augmented reality system is to allow human operators to move freely in their environment and to view and interact in real time with geo-referenced data via mobile wireless devices. This requires new techniques for 3D localization, visualization and 3D interaction, adapted to outdoor working conditions (brightness variation, characteristics of the displays used, etc.). This paper surveys recent advances in outdoor augmented reality. It presents a broad retrospective of the work carried out in this field, covering methodological aspects (localization methods, generation of 3D models, visualization and interaction approaches), technological aspects (sensors, visualization devices and software architecture) and industrial aspects.

    3D Localization Based on a Multi-Sensor Fallback Approach for Mobile Augmented Reality in Outdoor Environments

    No full text
    The democratization of mobile devices such as smartphones, PDAs and tablet PCs makes it possible to use augmented reality systems in large-scale environments. However, in order to implement such systems, many issues must be addressed; among them, 3D localization is one of the most important. Indeed, estimating the position and orientation (the pose) of the viewpoint (of the camera or the user) makes it possible to register virtual objects over the visible part of the real world. In this paper, we present an original localization system for large-scale environments that uses a markerless vision-based approach to estimate the camera pose, relying on natural feature points extracted from images. Since this type of method is sensitive to the brightness changes, occlusions and sudden motion that are likely to occur in outdoor environments, we use two additional sensors to assist the vision process. Our aim is to demonstrate the feasibility of such an assistance scheme in a large-scale outdoor environment: the intent is to provide a fallback system for the vision in case of failure, as well as to reinitialize the vision system when needed. The complete localization system aims to be autonomous and adaptable to different situations. We present an overview of our system, its performance, and results obtained from experiments performed in an outdoor environment under real conditions.
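
The fallback ("suppléance") scheme described above can be illustrated with a minimal sketch. The class and method names below are hypothetical, not the authors' implementation; the point is only the control flow: prefer the vision pose, fall back to the GPS/inertial estimate on failure, and flag a re-initialization.

```python
# Minimal sketch of a vision/aid-sensor fallback ("suppléance") loop.
# All names are hypothetical, for illustration only.

class FallbackLocalizer:
    """Prefer the vision pose; fall back to the GPS/inertial estimate
    when vision fails, and request a vision re-initialization."""

    def __init__(self):
        self.needs_reinit = False

    def update(self, vision_pose, vision_ok, aid_pose):
        if vision_ok:
            self.needs_reinit = False
            return vision_pose      # accurate path: markerless vision
        self.needs_reinit = True    # vision lost: schedule re-init
        return aid_pose             # degraded path: GPS + inertial
```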

    3D camera tracking for mixed reality using multi-sensor technology

    No full text
    The concept of Mixed Reality (MR) aims at completing our perception of the real world by adding fictitious elements that are not naturally perceptible, such as computer-generated images, virtual objects, text, symbols, graphics, sounds and smells. One of the major challenges for an efficient Mixed Reality system is to ensure the spatiotemporal coherence of the augmented scene between the virtual and the real objects. The quality of the real/virtual registration depends mainly on the accuracy of the 3D camera pose estimation. The goal of this chapter is to provide an overview of the recent multi-sensor fusion approaches used in Mixed Reality systems for 3D camera tracking. We describe the main sensors used in these approaches and detail the issues surrounding their use (calibration process, fusion strategies, etc.). We include descriptions of several Mixed Reality techniques developed in recent years that use multi-sensor technology. Finally, we highlight new directions and open problems in this research field.
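
As a generic illustration of multi-sensor fusion (not one of the specific strategies surveyed in this chapter), a complementary filter blends a drifting but smooth gyroscope integration with a noisy but drift-free absolute measurement, such as a vision-based orientation:

```python
def complementary_filter(angle, gyro_rate, abs_angle, dt, alpha=0.98):
    """One fusion step for a single orientation angle (radians).

    angle:      previous fused estimate
    gyro_rate:  angular rate from the inertial sensor (rad/s)
    abs_angle:  absolute, drift-free measurement (e.g. from vision)
    alpha:      weight of the high-frequency gyroscope path
    """
    predicted = angle + gyro_rate * dt          # integrate the gyro
    return alpha * predicted + (1.0 - alpha) * abs_angle
```

A higher `alpha` trusts the gyroscope's short-term smoothness; the small `(1 - alpha)` share of the absolute measurement is what prevents long-term drift.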

    Efficient initialization schemes for real-time 3D camera tracking using image sequences

    No full text
    3D camera tracking is an important issue for many kinds of applications, such as augmented reality and robot navigation. When image sequences are used, tracking must be initialized by matching 2D data extracted from images with 3D data representing a priori knowledge of the world. This process is difficult, and the accuracy of the pose estimation strongly depends on the accuracy of this matching step. In this paper, we present two original approaches that achieve 2D/3D point-based matching in order to initialize 3D camera tracking. The first is semi-automatic and requires the intervention of the user; the second is completely automatic. Both approaches are based on SURF descriptors, which have the advantage of being fast and robust to outliers. We describe the two approaches and present and discuss results obtained from experiments performed on real data.
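
The 2D/3D matching step can be sketched as nearest-neighbour descriptor matching with a ratio test, a standard companion to SURF-style descriptors (the authors' exact matching strategy may differ; names and thresholds below are illustrative):

```python
import numpy as np

def match_descriptors(query, train, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: pair each 2D image
    descriptor with the closest 3D-point descriptor, keeping a match
    only when it is clearly better than the second-closest candidate.

    query, train: (N, D) float arrays of descriptors.
    Returns a list of (query_index, train_index) pairs.
    """
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(train - q, axis=1)   # L2 distance to every candidate
        best, second = np.argsort(d)[:2]
        if d[best] < ratio * d[second]:         # ambiguous matches are dropped
            matches.append((qi, int(best)))
    return matches
```

The resulting 2D/3D correspondences would then feed a pose solver (e.g. a PnP estimator) to initialize the tracker.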

    Hybrid Localization System for Mobile Outdoor Augmented Reality Applications

    No full text
    Outdoor augmented reality applications often combine heterogeneous sensors to recover 3D localization (position and orientation) in large environments. Indeed, accurate localization is critical to register virtual augmentations over a real scene. This paper describes a localization system composed of two parts: an Aid-localization subsystem and a Vision subsystem. The Aid-localization subsystem, composed of a GPS receiver and an inertial sensor, has two functions: to initialize the visual tracking and to estimate the user's position and orientation when visual tracking fails. The Vision subsystem, which is the main block, continuously estimates the user's position and orientation using point-based visual tracking.
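
Once a pose is recovered, registering a virtual point over the real scene reduces to a standard pinhole projection. The sketch below is generic, with hypothetical intrinsics, and is not taken from the paper:

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a 3D world point into pixel coordinates using the
    estimated camera pose (R, t) and intrinsics K:  x ~ K (R X + t)."""
    X_cam = R @ X_world + t          # world frame -> camera frame
    x = K @ X_cam                    # camera frame -> homogeneous pixels
    return x[:2] / x[2]              # perspective division

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
```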

    A GPS-IMU-Camera modelization and calibration for 3d localization dedicated to outdoor mobile applications

    No full text
    3D localization is an important process for many types of applications, such as augmented reality or mobile robotics. To achieve it, outdoor applications are converging on hybrid sensor systems: combining different types of sensors tends to overcome the drawbacks of any single type of sensor and thus increases robustness and accuracy. However, combining several types of sensors raises various problems, among which calibration is an important step. Indeed, each sensor provides measurements in its own coordinate system, so these measurements must be converted into a common reference coordinate system. This process consists in estimating the transformations that map one coordinate system to another, and the accuracy of the hybrid sensor depends on the accuracy of this procedure. In our work, we use three types of sensors: a camera, an inertial measurement unit (IMU) and a GPS receiver. This hybrid sensor must provide, at any time, the position and orientation of the camera viewpoint with respect to the world coordinate system (the camera pose). Thus, the orientations provided by the IMU and the GPS positions must be re-expressed in the camera coordinate system. In this paper, we propose two calibration approaches based on models associated with each pair of sensors. Results of experiments conducted under real conditions are also presented.
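
The purpose of such a calibration can be sketched as composing the fixed sensor-to-sensor transforms with the runtime readings. The frame conventions and names below are assumptions for illustration (e.g. the antenna-to-camera lever arm expressed in the IMU frame), not the paper's models:

```python
import numpy as np

def camera_pose(R_imu2world, p_gps_world, R_cam2imu, lever_imu):
    """Compose runtime readings with fixed calibration parameters.

    R_imu2world: IMU orientation (IMU frame -> world), from the inertial sensor
    p_gps_world: antenna position in the world frame, from the GPS receiver
    R_cam2imu:   fixed camera-to-IMU rotation (calibration result)
    lever_imu:   fixed antenna-to-camera offset, expressed in the IMU frame
    """
    R_cam2world = R_imu2world @ R_cam2imu                 # orientation chain
    p_cam_world = p_gps_world + R_imu2world @ lever_imu   # lever-arm correction
    return R_cam2world, p_cam_world
```

The two fixed quantities (`R_cam2imu`, `lever_imu`) are exactly what a calibration procedure must estimate once; errors in them propagate directly into every subsequent pose.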

    Markerless Vision-Based Tracking of Partially Known 3D Scenes for Outdoor Augmented Reality Applications

    No full text
    This paper presents a new robust and reliable markerless camera tracking system for outdoor augmented reality using only a mobile handheld camera. The proposed method is particularly effective for partially known 3D scenes, where only an incomplete 3D model of the outdoor environment is available. Indeed, the system combines an edge-based tracker with a sparse 3D reconstruction of the real-world environment to keep tracking the camera even when the model-based tracker fails. Experiments carried out on real data demonstrate the robustness of our approach to occlusions and scene changes.

    Toward an Inertial/Vision Sensor Calibration for Outdoor Augmented Reality Applications

    No full text
    In mobile outdoor augmented reality applications, accurate localization is critical to register virtual augmentations over a real scene. Vision-based approaches provide accurate localization estimates but remain too sensitive to environmental conditions (changes in brightness, sudden motion, occlusions, etc.). This main drawback can be overcome by adding other types of sensors. In this paper, we combine a camera with an inertial sensor to continuously estimate the camera's orientation. The accuracy of this hybrid sensor depends on the accuracy of the calibration process that determines the relationship between the two sensors; this relationship allows the orientation provided by the inertial sensor to be expressed with respect to the camera coordinate system. The proposed method is simple, fast and accurate. Experimental results on real data are presented.
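
A generic way to estimate a fixed camera-to-inertial rotation from paired orientation readings is an orthogonal-Procrustes average; this is a common textbook scheme, not necessarily the calibration method proposed in the paper:

```python
import numpy as np

def calibrate_cam_to_imu(R_imu2world_list, R_cam2world_list):
    """Estimate the fixed camera-to-IMU rotation from paired readings.

    For each pair, R_cam2imu = R_imu2world^T @ R_cam2world; the noisy
    per-pair estimates are summed and projected back onto SO(3)
    (orthogonal Procrustes via SVD).
    """
    M = sum(Ri.T @ Rc for Ri, Rc in zip(R_imu2world_list, R_cam2world_list))
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```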

    Outdoor augmented reality system for geological applications

    No full text
    Augmented reality has been shown to be useful in many application areas, such as maintenance, urban planning, interior design and entertainment. The growth of mobile augmented reality in recent years is due to the evolution of technology at various levels, from sensors (GPS, inertial sensors, etc.) to mobile devices (tablet PCs, PDAs, etc.). In this paper, we present a mobile augmented reality system dedicated to outdoor applications. It encompasses a localization process based on an assistance scheme that combines data acquired from a hybrid sensor (camera, GPS and inertial sensor). Visualization and interaction are performed on a tablet PC. The system has been tested in an application intended for geologists to monitor and supervise the long-term restoration of a castle. This paper details the various components of the system, the results that were obtained, and how the whole application was evaluated with end users.