
    Mixed Reality Browsers and Pedestrian Navigation in Augmented Cities

    In this paper, we use a declarative format for positional audio, with synchronization between audio chunks expressed in SMIL. This format has been specifically designed for the type of audio used in AR applications, and the audio engine associated with it runs on mobile platforms (iOS, Android). Our MRB, called IXE, uses a format based on volunteered geographic information (OpenStreetMap), so OSM documents for IXE can be fully authored inside OSM editors such as JOSM. This contrasts with other AR browsers such as Layar, Junaio, and Wikitude, which use a Point of Interest (POI) based format with no notion of ways. This introduces a fundamental difference, and in some sense a duality, between IXE and the other AR browsers: in IXE, Augmented Virtuality (AV) navigation along a route (composed of ways) is central and AR interaction with objects is delegated to associated 3D activities, whereas in AR browsers navigation along a route is delegated to associated map activities and AR interaction with objects is central. IXE supports multiple tracking technologies and therefore allows both indoor navigation in buildings and outdoor navigation at the level of sidewalks. A first Android version of the IXE browser will be released at the end of 2013. Being based on volunteered geographic information, it will allow building accessible pedestrian networks in augmented cities.
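    The abstract does not give the format's actual schema; as a rough illustration only, here is a minimal Python sketch (standard-library ElementTree) of what a declarative positional-audio cue with SMIL-style sequencing might look like. All element and attribute names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: element and attribute names are illustrative,
# not the paper's actual MAUDL/SMIL format.
cue = ET.Element("cue", id="turn-left")

# SMIL-style <seq>: children play one after another, so the chime
# and the spoken instruction stay synchronized as one audio chunk.
seq = ET.SubElement(cue, "seq")
ET.SubElement(seq, "audio", src="chime.wav")
ET.SubElement(seq, "audio", src="turn_left.wav", begin="0.2s")

# Positional attributes place the source relative to the listener,
# e.g. at the sidewalk corner the pedestrian should head toward.
ET.SubElement(cue, "position", x="-2.0", y="0.0", z="5.0")

print(ET.tostring(cue, encoding="unicode"))
```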

    Mixed Reality Browsers

    This paper focuses on Mixed Reality Browsers (MRB), which merge real and virtual worlds somewhere along the virtuality continuum connecting completely real environments to completely virtual ones. We present the audio-visual MRB developed by the WAM project-team of INRIA at Grenoble, which uses an RDF data format for POIs whose URIs refer to content expressed in HTML5 and in a declarative data format for interactive audio.
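    The paper's concrete RDF vocabulary is not given in the abstract; the rdflib sketch below only illustrates the idea of a POI whose URI links to HTML5 content and an interactive-audio document. The `poi#` namespace, property names, and URLs are invented for the example.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

# Hypothetical vocabulary, not the paper's actual schema.
POI = Namespace("http://example.org/poi#")

g = Graph()
poi = URIRef("http://example.org/poi/grenoble-townhall")

g.add((poi, RDF.type, POI.PointOfInterest))
g.add((poi, POI.lat, Literal(45.1885)))
g.add((poi, POI.lon, Literal(5.7245)))
# The POI refers to HTML5 content and an interactive-audio document.
g.add((poi, POI.content, URIRef("http://example.org/poi/townhall.html")))
g.add((poi, POI.audio, URIRef("http://example.org/poi/townhall.a2ml")))

print(g.serialize(format="turtle"))
```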

    The Mobile Audio Language MAUDL and its Interactive Audio Rendering Engine IXE

    Building a navigation system based solely on audio guidance is a complex task. The system must provide the user with enough information for guidance without flooding them with sounds, deliver multiple kinds of guidance cues without overloading the auditory space, prioritize information so that the most pertinent item is delivered at any given time, and guide the user precisely in all kinds of environments. With these objectives, the Mobile Audio Language (MAUDL) has been defined in this research work, after a review of the limitations and problems of existing formats in a navigation context. Customization of the audio rendering of guidance is one aspect showcased by the different features available in MAUDL. In addition, a new sound manager named Interactive eXtensible Engine (IXE) has been developed to provide software support for the language; it integrates all the current features of MAUDL and has been specifically designed for mobile platforms. This research report details the various problems encountered while developing such a system and the technical decisions that led to the design of this library.
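    IXE's actual API is not shown in the abstract; the toy Python sketch below only illustrates the prioritization problem it describes, playing the most pertinent cues first so the auditory space is never overloaded. Class and method names are assumptions.

```python
import heapq
import time

class CueManager:
    """Toy sketch of priority-based cue scheduling (not the real IXE):
    only the most pertinent cues play at a given time."""

    def __init__(self, max_concurrent=2):
        self.max_concurrent = max_concurrent
        self.queue = []  # min-heap; lower number = higher priority

    def post(self, priority, name):
        # Timestamp breaks ties so equal-priority cues play in order.
        heapq.heappush(self.queue, (priority, time.monotonic(), name))

    def tick(self):
        # Play at most max_concurrent cues, most pertinent first;
        # lower-priority cues stay queued for a quieter moment.
        playing = []
        for _ in range(min(self.max_concurrent, len(self.queue))):
            _, _, name = heapq.heappop(self.queue)
            playing.append(name)
        return playing

mgr = CueManager()
mgr.post(0, "turn_left_now")       # urgent guidance cue
mgr.post(5, "cafe_on_your_right")  # ambient point-of-interest cue
mgr.post(1, "crossing_ahead")
print(mgr.tick())  # ['turn_left_now', 'crossing_ahead']
```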

    Augmented Reality Audio Editing

    The concept of augmented reality audio (ARA) characterizes techniques in which a physically real sound and voice environment is extended with virtual, geolocalized sound objects. We show that the authoring of an ARA scene can be done through an iterative process composed of two stages: in the first, the author moves through the rendering zone to apprehend the audio spatialization and the chronology of the audio events; in the second, the sequencing of the sound sources and the DSP acoustics parameters are edited textually. This authoring process is based on the joint use of two XML languages: OpenStreetMap for maps and A2ML for interactive 3D audio. Since A2ML is a format for a cue-oriented interactive audio system, requests for interactive audio services are made through TCDL, a Tag-based Cue Dispatching Language. This separation of modeling and audio rendering is similar to what is done for the web of documents with HTML and CSS style sheets.
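    TCDL's concrete syntax is not given in the abstract; purely as a hypothetical illustration of tag-based cue dispatching, the sketch below selects every cue whose tag set intersects the requested tags. The matching rule is an assumption, not TCDL's actual semantics.

```python
# Illustrative cue library: names and tags are invented.
cues = {
    "footsteps":  {"tags": {"ambient", "street"}},
    "turn_right": {"tags": {"guidance", "junction"}},
    "fountain":   {"tags": {"ambient", "park"}},
}

def dispatch(requested_tags):
    """Return every cue whose tag set intersects the request."""
    return [name for name, cue in cues.items()
            if cue["tags"] & set(requested_tags)]

print(dispatch(["guidance"]))         # ['turn_right']
print(dispatch(["ambient", "park"]))  # ['footsteps', 'fountain']
```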

    D3.1 User expectations and cross-modal interaction

    This document is deliverable D3.1, “User expectations and cross-modal interaction”. It presents user studies of expectations of and reactions to content presentation methods for mobile AR applications, along with recommendations for realizing an interface and interaction design in accordance with user needs and disabilities.

    Localization of Eigenvalues and Computation of Invariant Subspaces

    Universities: Université scientifique et médicale de Grenoble and Institut national polytechnique de Grenoble
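    No abstract is available for this thesis; purely as a pointer to the topic named in the title, the sketch below shows two classical tools: Gershgorin disks for localizing eigenvalues, and a sorted Schur decomposition (via SciPy) yielding an orthonormal basis of an invariant subspace. The matrix and threshold are arbitrary and unrelated to the thesis itself.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[4.0, 1.0, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.2, 7.0]])

# Gershgorin localization: every eigenvalue lies in some disk centered
# at A[i, i] with radius equal to the sum of |off-diagonal| row entries.
for i in range(A.shape[0]):
    radius = np.sum(np.abs(A[i])) - abs(A[i, i])
    print(f"disk {i}: center {A[i, i]}, radius {radius}")

# Invariant subspace: sort the real Schur form so eigenvalues with
# real part > 3 come first; the leading k columns of Z then span the
# corresponding invariant subspace (A @ basis ≈ basis @ T[:k, :k]).
T, Z, k = schur(A, output="real", sort=lambda re, im: re > 3)
basis = Z[:, :k]
print(basis)
```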

    Basic Concepts in Augmented Reality Audio

    The basic difference between real and virtual sound environments is that virtual sounds originate from another environment or are artificially created, whereas real sounds are the naturally existing sounds in the user's own environment. Augmented Reality Audio combines these aspects by mixing real and virtual sound scenes so that virtual sounds are perceived as an extension of, or a complement to, the natural ones.
