
    Software Defined Media: Virtualization of Audio-Visual Services

    Internet-native audio-visual services are developing rapidly, and object-based audio-visual services are gaining importance among them. In 2014, we established the Software Defined Media (SDM) consortium to target new research areas and markets involving object-based digital media and Internet-by-design audio-visual environments. In this paper, we introduce the SDM architecture, which virtualizes networked audio-visual services alongside the development of smart buildings and smart cities using Internet of Things (IoT) devices and smart building facilities. We design the SDM architecture as a layered architecture to promote the development of innovative applications on the basis of rapid advancements in software-defined networking (SDN). We then implement a prototype system based on the architecture, present it at an exhibition, and provide it as an SDM API to application developers at hackathons, where various types of applications were developed using the API. An evaluation of SDM API access shows that the prototype SDM platform effectively provides 3D audio reproducibility and interactivity for SDM applications.
    Comment: IEEE International Conference on Communications (ICC 2017), Paris, France, 21-25 May 2017
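
    The abstract describes application developers placing and controlling sound objects through the SDM API. As a rough illustration only, the sketch below shows how an object-based audio request to such a platform might look; the endpoint, payload fields and function names are assumptions made for this example, not the consortium's actual interface.

        # Hypothetical client for an SDM-style object-based audio platform.
        # The URL and JSON schema below are placeholders, not the real SDM API.
        import json
        from urllib import request

        SDM_ENDPOINT = "http://sdm.example.org/api/v1/audio-objects"  # placeholder URL

        def place_audio_object(object_id: str, source_uri: str,
                               x: float, y: float, z: float) -> dict:
            """Register a sound object at a 3D position inside a virtualized venue."""
            payload = {
                "id": object_id,
                "source": source_uri,                   # media stream or file reference
                "position": {"x": x, "y": y, "z": z},   # metres, venue coordinates
            }
            req = request.Request(
                SDM_ENDPOINT,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with request.urlopen(req) as resp:
                return json.loads(resp.read().decode("utf-8"))

        # Example: place a violin stem two metres in front of the listener.
        # place_audio_object("violin-1", "rtsp://media.example.org/violin", 0.0, 2.0, 0.0)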

    Using immersive audio and vibration to enhance remote diagnosis of mechanical failure in uncrewed vessels.

    There is increasing interest in the maritime industry in the potential use of uncrewed vessels to improve the efficiency and safety of maritime operations. This raises a number of questions about the maintenance and repair of mechanical systems, in particular critical propulsion systems, whose failure could endanger the vessel. While control data is commonly monitored remotely, engineers on board ship also employ a wide variety of sensory feedback, such as sound and vibration, to diagnose the condition of systems, and these are often not replicated in remote monitoring. To assess the potential for enhancing remote monitoring and diagnosis, this project simulated an engine room (ER) based on a real vessel in Unreal Engine 4 for the HTC Vive VR headset. Audio was recorded from the vessel, with mechanical faults synthesized to create a range of simulated failures. To simulate operational requirements, the system was remotely fed data from an external server. The system allowed users to view normal control room data, listen to the overall sound of the space presented spatially over loudspeakers, isolate the sound of particular machinery components, and feel the vibration of machinery through a body-worn vibration transducer. Users could scroll through a 10-hour time history of system performance, including audio, vibration and data for snapshots at hourly intervals. Seven experienced marine engineers were asked to assess several scenarios for potential faults in different elements of the ER. They were assessed both quantitatively, regarding correct fault identification, and qualitatively, to gauge their perception of the system's usability. Users were able to diagnose simulated mechanical failures with a high degree of accuracy, mainly utilising audio and vibration stimuli, and reported specifically that the immersive audio and vibration improved realism and increased their ability to diagnose system failures from a remote location.
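
    The time history of hourly snapshots that users scroll through suggests a simple data model. The sketch below is a minimal illustration under assumed field names (per-hour telemetry plus audio and vibration clip references); it is not the project's actual data structure.

        # Minimal sketch of an hourly-snapshot time history for remote
        # engine-room monitoring. Field names are illustrative assumptions.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class EngineRoomSnapshot:
            hour: int               # 0..9 within the 10-hour history
            telemetry: dict         # control-room data (e.g. rpm, temperatures)
            audio_clip: str         # path/URI of the spatial audio recording
            vibration_clip: str     # path/URI of the vibration transducer signal

        def snapshot_at(history: List[EngineRoomSnapshot], hour: int) -> EngineRoomSnapshot:
            """Return the snapshot the user scrolled to, clamped to the recorded range."""
            hour = max(0, min(hour, len(history) - 1))
            return history[hour]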

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users that will feel much closer to a face-to-face meeting than the experience offered by conventional teleconferencing systems.

    SMART-I²: A Spatial Multi-users Audio-visual Real Time Interactive Interface

    The SMART-I² aims at creating a precise and coherent virtual environment by providing users with accurate localization cues for both audio and visuals. Wave Field Synthesis for audio rendering and Tracked Stereoscopy for visual rendering are each known to permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large Multi-Actuator Panel serving both as a loudspeaker array and as a projection screen, considerably reducing audio-visual incoherencies. The system performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.
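
    As a rough illustration of the rendering idea (not the SMART-I² implementation, which uses the full Wave Field Synthesis driving functions), the sketch below computes per-actuator delays and gains for a virtual point source behind a linear Multi-Actuator Panel using a simplified delay-and-gain approximation.

        # Simplified delay-and-gain approximation of point-source Wave Field
        # Synthesis for a linear actuator array; illustrative only.
        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s

        def wfs_delays_and_gains(actuator_x: np.ndarray, source_xy: tuple) -> tuple:
            """Per-actuator delay (s) and gain for a virtual source behind the panel.

            actuator_x: x positions (m) of the actuators along the panel at y = 0.
            source_xy:  (x, y) position of the virtual source, with y < 0 (behind).
            """
            sx, sy = source_xy
            distances = np.hypot(actuator_x - sx, sy)            # actuator-to-source distance
            delays = distances / SPEED_OF_SOUND                  # farther actuators fire later
            gains = 1.0 / np.sqrt(np.maximum(distances, 1e-3))   # ~1/sqrt(r) amplitude taper
            return delays, gains

        # Eight actuators spaced 20 cm apart, source 1 m behind the panel centre.
        xs = np.linspace(-0.7, 0.7, 8)
        delays, gains = wfs_delays_and_gains(xs, (0.0, -1.0))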

    Music in Virtual Space: Theories and Techniques for Sound Spatialization and Virtual Reality-Based Stage Performance

    This research explores virtual reality as a medium for live concert performance. I have realized compositions in which the individual performing on stage uses a VR head-mounted display, complemented by other performance controllers, to explore a composed virtual space. Movements and objects within the space are used to influence and control sound spatialization and diffusion, musical form, and sonic content. Audience members observe this in real time, watching the performer's journey through the virtual space on a screen while listening to spatialized audio on loudspeakers variable in number and position. The major artistic challenge I will explore through this activity is the relationship between virtual space and musical form. I will also explore and document the technical challenges of this activity, resulting in a shareable software tool called the Multi-source Ambisonic Spatialization Interface (MASI). MASI bridges VR technologies and associated software, ambisonic spatialization techniques, sound synthesis, and audio playback and effects, and establishes a unique workflow for working with sound in virtual space.
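
    To make the ambisonic side of such a workflow concrete, the sketch below encodes a mono source into first-order ambisonics using the standard AmbiX convention (ACN channel order, SN3D normalization). It is a generic illustration of the technique MASI builds on, not MASI's own code.

        # First-order ambisonic encoding (AmbiX: ACN order, SN3D normalization)
        # of a mono source at a given azimuth and elevation.
        import numpy as np

        def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
            """Return a (4, N) array of W, Y, Z, X ambisonic channels."""
            az = np.radians(azimuth_deg)    # 0 deg = front, positive = counter-clockwise
            el = np.radians(elevation_deg)  # 0 deg = horizon, positive = up
            w = mono * 1.0
            y = mono * np.sin(az) * np.cos(el)
            z = mono * np.sin(el)
            x = mono * np.cos(az) * np.cos(el)
            return np.stack([w, y, z, x])

        # Example: encode one second of noise arriving 45 degrees to the left.
        signal = 0.1 * np.random.default_rng(0).standard_normal(48000)
        b_format = encode_foa(signal, azimuth_deg=45.0, elevation_deg=0.0)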

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to offer a tailored environment that better suits each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
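
    As a toy illustration of the fuzzy-logic approach described above (not the authors' FLAME-based model), the sketch below maps two hypothetical game variables to a crisp frustration estimate using triangular membership functions and two rules.

        # Toy fuzzy-logic estimate of player frustration from game events.
        # Variables, membership functions and rules are illustrative assumptions.

        def tri(x: float, a: float, b: float, c: float) -> float:
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def frustration(damage_taken: float, kills: float) -> float:
            """Crisp frustration in [0, 1] via two rules and weighted defuzzification."""
            high_damage = tri(damage_taken, 30.0, 100.0, 170.0)   # health points lost
            low_kills = tri(kills, -1.0, 0.0, 3.0)                # few recent kills
            r_high = min(high_damage, low_kills)                  # AND as minimum
            r_low = 1.0 - r_high
            return (r_high * 0.9 + r_low * 0.1) / (r_high + r_low)

        # Example: a player who just lost 80 health without scoring a kill.
        print(frustration(damage_taken=80.0, kills=0.0))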

    Auralization of Air Vehicle Noise for Community Noise Assessment

    This paper serves as an introduction to air vehicle noise auralization and documents the current state of the art. Auralization of flyover noise considers the source, path, and receiver as part of a time-marching simulation. Two approaches are offered: a time-domain approach performs synthesis followed by propagation, while a frequency-domain approach performs propagation followed by synthesis. Source noise description methods are offered for isolated and installed propulsion system and airframe noise sources for a wide range of air vehicles. Methods for synthesis of broadband, discrete-tone, steady and unsteady periodic, and aperiodic sources are presented, and propagation methods and receiver considerations are discussed. Auralizations applied to vehicles ranging from large transport aircraft to small unmanned aerial systems demonstrate current capabilities.
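
    As a minimal illustration of the time-domain "synthesis followed by propagation" approach (not the paper's method, and with arbitrary example parameters), the sketch below synthesizes a steady tone at the source and then applies a time-varying propagation delay (Doppler) and spherical-spreading attenuation for a straight, level flyover past a ground receiver.

        # Minimal time-domain flyover auralization: synthesize, then propagate.
        import numpy as np

        FS = 44100            # sample rate, Hz
        C = 343.0             # speed of sound, m/s
        DURATION = 8.0        # s
        SPEED = 60.0          # aircraft ground speed, m/s
        ALTITUDE = 100.0      # height above the receiver, m

        t = np.arange(int(FS * DURATION)) / FS
        source = 0.2 * np.sin(2 * np.pi * 200.0 * t)   # 200 Hz source tone (synthesis)

        # Straight, level flight passing overhead at t = DURATION / 2.
        x = SPEED * (t - DURATION / 2)                 # horizontal offset, m
        r = np.hypot(x, ALTITUDE)                      # slant range, m

        # Propagation: evaluate the source signal at the emission (retarded) time,
        # which yields the Doppler shift, and scale by 1/r for spherical spreading.
        emission_time = t - r / C
        received = np.interp(emission_time, t, source, left=0.0, right=0.0) / r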

    Life-Sized Audiovisual Spatial Social Scenes with Multiple Characters: MARC & SMART-I²

    With the increasing use of virtual characters in virtual and mixed reality settings, coordinating realism in audiovisual rendering with expressive virtual characters becomes a key issue. In this paper we introduce a new platform that combines two systems, MARC and SMART-I², to address realism and high quality in both audiovisual rendering and life-sized expressive characters. The goal of the resulting SMART-MARC platform is to investigate the impact of realism on multiple levels: spatial audiovisual rendering of a scene, and the appearance and expressive behaviors of virtual characters. Potential interactive applications include mediated communication in virtual worlds, therapy, games, the arts and e-learning. Future experimental studies will focus on 3D audio/visual coherence, social perception and ecologically valid interaction scenes.