
    Mixed Reality Browsers and Pedestrian Navigation in Augmented Cities

    In this paper, we use a declarative format for positional audio, with synchronization between audio chunks expressed in SMIL. This format has been designed specifically for the type of audio used in AR applications, and the audio engine associated with it runs on mobile platforms (iOS, Android). Our MRB browser, called IXE, uses a format based on volunteered geographic information (OpenStreetMap), so OSM documents for IXE can be fully authored inside OSM editors like JOSM. This contrasts with other AR browsers such as Layar, Junaio, and Wikitude, which use a Point of Interest (POI) based format with no notion of ways. This introduces a fundamental difference, and in some sense a duality relation, between IXE and the other AR browsers. In IXE, Augmented Virtuality (AV) navigation along a route (composed of ways) is central, and AR interaction with objects is delegated to associated 3D activities. In AR browsers, navigation along a route is delegated to associated map activities, and AR interaction with objects is central. IXE supports multiple tracking technologies and therefore allows both indoor navigation in buildings and outdoor navigation at the level of sidewalks. A first Android version of the IXE browser will be released at the end of 2013. Being based on volunteered geographic information, it will allow building accessible pedestrian networks in augmented cities.
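
    As a rough illustration of the way-based model described above, the Python sketch below extracts an ordered route from a standard OSM XML document. The node/way/nd structure is real OSM; the audio-cue tag is a hypothetical stand-in for IXE's actual annotations, which are not reproduced in this abstract.

        # Sketch: extract an ordered pedestrian route from a standard OSM XML
        # document, mirroring the way-based (rather than POI-based) model the
        # paper attributes to IXE. The audio-cue tag is hypothetical; the
        # <node>/<way>/<nd ref=...> structure is real OSM.
        import xml.etree.ElementTree as ET

        OSM_DOC = """
        <osm version="0.6">
          <node id="1" lat="43.6960" lon="7.2650"/>
          <node id="2" lat="43.6962" lon="7.2655"/>
          <node id="3" lat="43.6965" lon="7.2661"/>
          <way id="10">
            <nd ref="1"/><nd ref="2"/><nd ref="3"/>
            <tag k="highway" v="footway"/>
            <tag k="audio-cue" v="turn_left.wav"/>  <!-- hypothetical tag -->
          </way>
        </osm>
        """

        def route_from_way(osm_xml, way_id):
            """Return the ordered (lat, lon) list for one way."""
            root = ET.fromstring(osm_xml)
            nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
                     for n in root.findall("node")}
            way = root.find(f"way[@id='{way_id}']")
            return [nodes[nd.get("ref")] for nd in way.findall("nd")]

        print(route_from_way(OSM_DOC, "10"))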

    D3.1 User expectations and cross-modal interaction

    This document is deliverable D3.1, “User expectations and cross-modal interaction”. It presents user studies conducted to understand expectations of, and reactions to, content presentation methods for mobile AR applications, together with recommendations for realizing an interface and interaction design in accordance with user needs and disabilities.

    Sensory maps

    Sensory maps depict the world as it is qualitatively experienced, drawing on alternative human sensory modalities to call attention to the more-than-visual sensory characteristics of place. Sensory maps combine aesthetics with empirically sensed datasets, both to depict personal, temporally specified realities and to advocate the world as individually constructed. As such, many sensory maps are exploratory and artistic in nature. Sensory mapping's roots can be traced to a historical desire to monitor changing urban environmental conditions and to navigational pragmatism. Historical practices that focus on sensed data have led to reforms in public hygiene, the quality of sonic environments, and human-scale urban planning that prioritizes diversity and well-being. Contemporary practitioners, concerned with the emotional, embodied, and affective aspects of cartography, utilize multiple sensory output media in addition to traditional paper and digital forms in order to draw attention to sensed, subjective characteristics and their relationship to place. Where sensory mapping has a pragmatic aim, urban psychogeographic mappings tend toward a political defamiliarization of known environments, drawing attention to the emotional and affective powers of the natural, the cultural, and the political. One prime hybrid example is Krygier's Guide Psychogéographique de OWU, the result of an improvised, multisensory project with young students.

    Augmented Reality Audio Editing

    The concept of augmented reality audio (ARA) characterizes techniques in which a physically real sound and voice environment is extended with virtual, geolocalized sound objects. We show that the authoring of an ARA scene can be done through an iterative process composed of two stages: in the first, the author moves through the rendering zone to apprehend the audio spatialization and the chronology of the audio events; in the second, the sequencing of the sound sources and the DSP acoustics parameters are edited textually. This authoring process is based on the joint use of two XML languages: OpenStreetMap for maps and A2ML for interactive 3D audio. Since A2ML is a format for a cue-oriented interactive audio system, requests for interactive audio services are made through TCDL, a Tag-based Cue Dispatching Language. This separation of modeling and audio rendering is similar to what is done for the web of documents with HTML and CSS style sheets.
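
    The A2ML and TCDL schemas are not reproduced in this abstract, so the Python sketch below only mirrors the cue-oriented separation it describes: a modeled scene of named, geolocalized sound cues with DSP parameters, kept apart from the dispatch requests that trigger them. All element and field names are illustrative assumptions.

        # Sketch of the cue-oriented idea behind A2ML/TCDL: the scene model
        # (named, geolocalized sound cues with DSP parameters) is kept separate
        # from the rendering requests that trigger them. All names below are
        # invented for illustration; the real A2ML/TCDL vocabulary is not shown.
        from dataclasses import dataclass

        @dataclass
        class SoundCue:
            source: str           # audio file for the cue
            lat: float            # geolocation of the virtual sound object
            lon: float
            gain_db: float = 0.0  # example DSP acoustics parameter

        # Modeling stage: the authored scene (analogous to an A2ML document).
        scene = {
            "entrance_chime": SoundCue("chime.wav", 43.6960, 7.2650),
            "narration_hall": SoundCue("hall.wav", 43.6962, 7.2655, gain_db=-6.0),
        }

        def dispatch(cue_name):
            """Rendering stage: a TCDL-like request naming a cue to play."""
            cue = scene[cue_name]
            print(f"play {cue.source} at ({cue.lat}, {cue.lon}), {cue.gain_db} dB")

        dispatch("entrance_chime")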

    Adaptive soundscape design for liveable urban spaces: a hybrid methodology across environmental acoustics and sonic art

    The aim of this research is to identify and implement soundscape improvement strategies for urban areas, based on loudspeaker placements in the outdoor environment and on a computer-based system for adaptive soundscape generation, integrating sonic art practice with acoustic engineering rigour.

    Autoencoding sensory substitution

    Tens of millions of people live blind, and their number is ever increasing. Visual-to-auditory sensory substitution (SS) encompasses a family of cheap, generic solutions to assist the visually impaired by conveying visual information through sound. The required SS training is lengthy: months of effort are necessary to reach a practical level of adaptation. There are two reasons for the tedious training process: the elongated substituting audio signal, and the disregard for the compressive characteristics of the human hearing system. To overcome these obstacles, we developed a novel class of SS methods by training deep recurrent autoencoders for image-to-sound conversion. We successfully trained deep learning models on different datasets to execute visual-to-auditory stimulus conversion. By constraining the visual space, we demonstrated the viability of shortened substituting audio signals, while proposing mechanisms, such as the integration of computational hearing models, to optimally convey visual features in the substituting stimulus as perceptually discernible auditory components. We tested our approach in two separate cases. In the first experiment, the author went blindfolded for five days while performing SS training on hand posture discrimination. The second experiment assessed the accuracy of reaching movements towards objects on a table. In both test cases, above-chance-level accuracy was attained after a few hours of training. Our novel SS architecture broadens the horizon of rehabilitation methods engineered for the visually impaired. Further improvements on the proposed model should yield hastened rehabilitation of the blind and, as a consequence, wider adoption of SS devices.
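
    The paper's exact network is not given in this abstract; the following is a minimal sketch, assuming placeholder sizes and PyTorch, of the general idea of a recurrent autoencoder whose bottleneck is a short sequence standing in for the substituting audio signal.

        # Minimal sketch, not the paper's actual architecture: an autoencoder
        # whose bottleneck is a short sequence (the substituting "audio"),
        # with a recurrent decoder reading that sequence back into an image.
        # Sizes, layers, and the toy batch are placeholder assumptions.
        import torch
        import torch.nn as nn

        SEQ_LEN, SEQ_DIM, IMG_DIM = 16, 8, 28 * 28  # assumed shapes

        class RecurrentSSAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                # Encoder: image -> short sequence standing in for the audio.
                self.encode = nn.Sequential(
                    nn.Linear(IMG_DIM, 256), nn.ReLU(),
                    nn.Linear(256, SEQ_LEN * SEQ_DIM), nn.Tanh(),
                )
                # Decoder: a GRU consumes the sequence; its final hidden
                # state is projected back to a reconstructed image.
                self.gru = nn.GRU(SEQ_DIM, 128, batch_first=True)
                self.decode = nn.Linear(128, IMG_DIM)

            def forward(self, img):
                seq = self.encode(img).view(-1, SEQ_LEN, SEQ_DIM)  # the "sound"
                _, h = self.gru(seq)
                return self.decode(h[-1]), seq

        model = RecurrentSSAutoencoder()
        img = torch.rand(4, IMG_DIM)                 # toy batch of flat images
        recon, audio_seq = model(img)
        loss = nn.functional.mse_loss(recon, img)    # reconstruction objective
        loss.backward()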

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations was explored, and future research topics were presented. A holistic experience using multi-sensory interaction was provided to people with visual impairment to convey the meaning and contents of a work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them to appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new concept aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art, and breaks down the boundaries between the disabled and the non-disabled in the field of culture and arts through continuous efforts to enhance accessibility. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.
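
    The book's actual color code is not reproduced in this summary; the sketch below merely shows, with invented placeholder values, one possible data structure for mapping a color to the pattern, temperature, scent, music, and vibration cues described above.

        # Illustrative sketch only: the concrete values are invented
        # placeholders, not the mapping the book developed.
        MULTISENSORY_COLOR_CODE = {
            "red":  {"pattern": "dense_dots", "temperature_c": 38,
                     "scent": "cinnamon", "note": "C4", "vibration_hz": 250},
            "blue": {"pattern": "waves", "temperature_c": 18,
                     "scent": "mint", "note": "G3", "vibration_hz": 120},
        }

        def render_color(color):
            """Look up the cues a device would present for one color."""
            cues = MULTISENSORY_COLOR_CODE[color]
            return ", ".join(f"{k}={v}" for k, v in cues.items())

        print(render_color("red"))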