12 research outputs found

    Border: A Live Performance Based on Web AR and a Gesture-Controlled Virtual Instrument

    Get PDF
    Recent technological advances, such as increased CPU/GPU processing speed and the miniaturization of devices and sensors, have created new possibilities for integrating immersive technologies into music and performance art. Virtual and Augmented Reality (VR/AR) have become increasingly interesting on mobile platforms as smartphones with the necessary CPU resources entered the consumer market. Combined with recent web technologies, any mobile device can simply connect with a browser to a local server to access the latest technology. The web platform also eases the integration of collaborative situated media in participatory artwork. In this paper, we present the interactive music improvisation piece 'Border,' premiered in 2018 at the Beyond Festival at the Center for Art and Media Karlsruhe (ZKM). The piece explores the interaction between a performer and the audience using web-based applications (including AR, real-time 3D audio/video streaming, advanced web audio, and gesture-controlled virtual instruments) on smart mobile devices.

    The HOA Library: Assessment and Perspectives

    No full text
    This article presents the current state of the HOA library, which remains under development, following a first article published in the proceedings of JIM 2012. We present the full set of objects in detail. We describe the contribution of plane-wave decomposition in the ambisonic context, as well as the use of binaural synthesis for virtual ambisonics. Finally, we discuss how musicians can get started with the library.
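The abstract refers to plane-wave decomposition and binaural synthesis in the ambisonic context. As a minimal sketch of the first-order horizontal ambisonic encode/decode that such a library builds on (this is not the HOA library's actual API; function names and the normalization convention are illustrative assumptions):

```python
import numpy as np

def encode_first_order(signal, azimuth_rad):
    """Encode a mono signal into horizontal first-order B-format
    (W, X, Y) using standard ambisonic encoding equations."""
    w = signal * (1.0 / np.sqrt(2.0))   # omnidirectional component
    x = signal * np.cos(azimuth_rad)    # front/back figure-of-eight
    y = signal * np.sin(azimuth_rad)    # left/right figure-of-eight
    return np.stack([w, x, y])

def decode_to_ring(bformat, speaker_azimuths_rad):
    """Basic projection decode of horizontal B-format to a ring of
    loudspeakers at the given azimuths."""
    w, x, y = bformat
    feeds = [np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y
             for az in speaker_azimuths_rad]
    return np.stack(feeds) / len(speaker_azimuths_rad)

# a source encoded at 90 degrees (left) is loudest in the left speaker
b = encode_first_order(np.ones(4), np.pi / 2.0)
feeds = decode_to_ring(b, [0.0, np.pi / 2.0, np.pi, 3.0 * np.pi / 2.0])
```

The same B-format signals can instead be fed to virtual loudspeakers convolved with HRTFs, which is the "binaural synthesis for virtual ambisonics" idea the abstract mentions.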

    Virtual acoustic rendering by state wave synthesis

    Get PDF
    In the context of virtual acoustic simulation techniques that rely on traveling-wave rendering as dictated by path-tracing methods (e.g., image-source, ray-tracing, beam-tracing), we introduce State Wave Synthesis (SWS), a novel framework for the efficient rendering of sound traveling waves exchanged between multiple directional sound sources and multiple directional sound receivers in time-varying conditions. The proposed framework represents sound-emitting and sound-receiving objects as multiple-input, multiple-output dynamical systems. Each input or output corresponds to a sound traveling wave received or emitted by the object from/to different orientations or at/from different positions of the object. To allow for multiple arriving/departing waves from/to different orientations and/or positions of an object in dynamic conditions, we introduce a discrete-time state-space formulation that allows the inputs or outputs of a system to mutate dynamically. The SWS framework treats virtual source or receiver objects as time-varying dynamical systems in state-space modal form, each allowing for an unlimited number of sound traveling-wave inputs and outputs.

    To model the sound emission and/or reception behavior of an object, data may be collected from measurements. These measurements, which may comprise real or virtual impulse or frequency responses from a real physical object or a numerical physical model of one, are jointly processed to design a multiple-input, multiple-output state-space model with mutable inputs and/or outputs. This mutable state-space model enables the simulation of direction- and/or position-dependent, frequency-dependent sound wave emission or reception by the object.

    At run time, each of the mutable state-space object models may present any number of inputs or outputs, with each input or output associated with a received/emitted sound traveling wave at a specific arrival/departure position or orientation. In a first formulation (the sound-wave form), the traveling of sound waves between object models is simulated by means of delay lines of time-varying length. In a second formulation (the state-wave form), it is simulated by propagating the state variables of source objects along delay lines of time-varying length. SWS allows the accurate simulation of frequency-dependent source and receiver directivity in time-varying conditions without any explicit time-domain or frequency-domain convolution processing. In addition, the framework enables time-varying, obstacle-induced, frequency-dependent attenuation of traveling waves without any dedicated digital filters. SWS facilitates the implementation of efficient virtual acoustic rendering engines, either in software or in dedicated hardware, allowing realizations in which the number of delay lines is independent of the number of traveling-wave paths being simulated. Moreover, the method enables straightforward dynamic coupling between virtual acoustic objects and their physics-based simulation counterparts in computer animation, virtual reality, video games, music synthesis, and other applications.

    In this presentation we will introduce the foundations of SWS, employing a real acoustic violin and a real human head as illustrative examples of a source object and a receiver object, respectively. In light of available implementation possibilities, we will examine the basic memory requirements and computational cost of the rendering framework and suggest how to include minimum-phase diffusive elements to procure additional diffuse-field contributions if necessary. Finally, we will expose limitations and discuss future opportunities for development.
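The delay-line mechanism underlying both formulations can be illustrated generically. The Python below is not the SWS implementation (SWS propagates state variables and needs no explicit filtering); it is only a sketch of one traveling-wave path rendered through a delay line of time-varying length, with linear interpolation and 1/r attenuation, and all names and parameters are illustrative assumptions:

```python
import numpy as np

def render_moving_path(source, distances_m, fs=48000, c=343.0):
    """Render one traveling-wave path whose length varies over time.

    Each output sample n reads the source signal back in time by
    distances_m[n] / c seconds (a delay line of time-varying length),
    using linear interpolation for fractional delays and a simple
    1/r spherical-spreading attenuation.
    """
    delays = distances_m / c * fs          # delay in samples, per output sample
    out = np.zeros_like(source, dtype=float)
    for n in range(len(source)):
        read = n - delays[n]               # fractional read position
        if read < 0:
            continue                       # wavefront has not arrived yet
        i = int(read)
        frac = read - i
        s = source[i] * (1.0 - frac)
        if i + 1 < len(source):
            s += source[i + 1] * frac
        out[n] = s / max(distances_m[n], 1.0)
    return out

# An impulse traveling 34.3 m at c = 343 m/s arrives ~100 samples later at fs = 1 kHz
src = np.zeros(200)
src[0] = 1.0
out = render_moving_path(src, np.full(200, 34.3), fs=1000)
```

Varying `distances_m` per sample produces the Doppler shift of a moving source for free, which is why time-varying delay lines are the standard transport in path-tracing renderers.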

    Nightports at Hull Minster: Transporting a Site-Specific Musical Work Across Physical and Virtual Spaces

    Get PDF
    'Nightports at Hull Minster' is a musical project that harnesses spatialisation techniques to present music composed of the sounds of Hull Minster, UK, in both the location itself and alternative performance spaces, whilst still expressing the spatiality of the location. The root of the project is a live electronic music performance by Nightports (The Leaf Label), using only sounds recorded in the Minster itself, spatialised in real-time by another performer across a 25-loudspeaker array in situ. Three variant performance approaches are detailed that allow this original principle of spatialisation to endure in contrasting locations: a physical acousmonium in situ; a hybrid acousmonium and virtualmonium; and headphone-targeted virtualisations for radio. The compositional and performance processes, influenced by architectural and acoustic considerations, necessitated the development of a scalable and adaptable spatialisation system by the Hull Electroacoustic Research Organisation (HEARO). Alongside the technical implementations, this paper details performance observations including the interplay between spatial dynamics, audience interaction, and sonic immersion, while also offering insights into potential refinements and advancements in the spatialisation methods.
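The abstract does not describe HEARO's system internals, but pairwise equal-power amplitude panning over a loudspeaker ring is one common building block of scalable spatialisation systems of this kind. A hypothetical sketch, assuming a horizontal ring with at least two distinct speaker azimuths:

```python
import numpy as np

def pan_to_ring(azimuth_deg, speaker_azimuths_deg):
    """Equal-power pan of a virtual source between the two adjacent
    loudspeakers of a horizontal ring that bracket its azimuth.
    Returns (sorted speaker azimuths, per-speaker gains)."""
    az = azimuth_deg % 360.0
    spk = sorted(a % 360.0 for a in speaker_azimuths_deg)
    gains = np.zeros(len(spk))
    for i in range(len(spk)):
        lo = spk[i]
        hi = spk[(i + 1) % len(spk)]
        span = (hi - lo) % 360.0                      # arc between adjacent speakers
        if (az - lo) % 360.0 <= span:
            t = ((az - lo) % 360.0) / span
            gains[i] = np.cos(t * np.pi / 2.0)        # equal-power law,
            gains[(i + 1) % len(spk)] = np.sin(t * np.pi / 2.0)  # g1^2 + g2^2 = 1
            break
    return spk, gains

# a source at 45 degrees between speakers at 0 and 90 gets equal gains
_, g = pan_to_ring(45.0, [0, 90, 180, 270])
```

Because the gain law depends only on the speaker azimuth list, the same control data can address a 25-speaker array, a reduced acousmonium, or virtual speakers rendered binaurally, which is the kind of scalability the paper discusses.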

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    Get PDF

    Maynooth Musicology: Postgraduate Journal

    Get PDF
    The aim of the Maynooth Musicology: Postgraduate Journal is twofold: it compiles a selection of articles written by postgraduate students in our department each year, and it affords up to three of our postgraduate students the valuable experience of editing their first journal, drawing on our joint professional work. This volume contains thirteen essays by postgraduate students reflecting current areas of specialism in the music department. Irish musical studies are addressed in articles by Adèle Commins, Jennifer O'Connor and Lisa Parker; Schubert studies are represented by Adam Cullen; nineteenth- and twentieth-century song studies are represented by Paul Higgins, Aisling Kenny and Meng Ren, and late European Romanticism by Jennifer Lee and Emer Nestor. Gender is addressed by Jennifer Halton, and essays within the area of electroacoustic music and music technology are contributed by Brian Bridges, Brian Carty and Barbara Dignam.

    Maynooth Musicology: Postgraduate Journal

    Get PDF
    The second issue of Maynooth Musicology Postgraduate Journal will be a memorable one for the student editors, and for me too as founder and general editor. Many of the young musicologists who have written these essays will embark on new journeys, leaving our department with MLitts or PhDs, some bringing their experience at Maynooth to bear on studies further afield. It is to the students of this volume and to musicology students in general that this preface is directed, for what matters on such occasions is not so much the educational givens of your background but the state of readiness of your own spirit. In fact, the ability to start out upon your own impulse is fundamental to the gift of keeping going on your own terms, not to mention the further and more fulfilling gift of getting going all over again: never resting upon the oars of success or in the doldrums of disappointment, but getting renewed and revived by some further transformation.

    Comparison and Combination of Visual and Audio Renderings for the Design of Human-Computer Interfaces (from Human Factors to Distortion-Based Presentation Strategies)

    Get PDF
    Although more and more sound and audiovisual data are available, the majority of access interfaces rely solely on visual presentation. Many visualization techniques have been proposed that present several documents simultaneously and use distortion to highlight the most relevant information. We propose to define equivalent audio techniques for the presentation of several competing sound files, and to combine such audio and visual presentation strategies optimally for multimedia documents.

    To better adapt these strategies to the user, we studied the perceptual and attentional processes involved in listening to and watching simultaneous audiovisual objects, focusing on the interactions between the two modalities. Exploiting visual size and sound level as parameters, we extended the concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Building on this concept, we developed an application for browsing a collection of video documents. We compared our tool with another rendering mode, Pan & Zoom, through a usability study. The results, especially the subjective ones, encourage further work on multimodal presentation strategies that add an audio rendering to the visual renderings already available.

    A second study concerned the identification of environmental sounds in noise in the presence of a visual context. The noise simulated the presence of multiple simultaneous sound sources, as would occur in an interface where audio and audiovisual documents are presented together. The results confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruence between the visual and auditory components for object recognition, and deepened our knowledge of the auditory perception of environmental sounds.

    Finally, we investigated the attentional processes involved in searching for one object among many, in particular the pop-out phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, regardless of the modality. Search and identification processes are accelerated when the sharpness cue corresponds to the target, but slowed when it corresponds to a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, the redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require applying a distortion to both modalities.
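The thesis extends the focus+context magnifying lens from visual size to sound level. As a hypothetical illustration of such a distortion-based mapping (the function name, linear falloff, and parameters are assumptions, not the thesis's actual model), a single weighting function can drive both the visual scale and the playback gain of competing documents:

```python
def lens_weights(positions, focus, radius=2.0, max_boost=2.0):
    """Distortion-based focus+context weighting (hypothetical mapping):
    items at the focus get the full boost, the effect decays linearly
    to 1.0 at `radius`, and context items keep weight 1.0 rather than
    being removed, so the overview is preserved."""
    weights = []
    for p in positions:
        d = abs(p - focus)
        proximity = max(0.0, 1.0 - d / radius)   # 1 at focus, 0 beyond radius
        weights.append(1.0 + (max_boost - 1.0) * proximity)
    return weights

# one weighting drives both modalities: visual scale and sound level
scales = lens_weights([0, 1, 2, 3, 4], focus=2)   # -> [1.0, 1.5, 2.0, 1.5, 1.0]
```

Reusing the same weights for both modalities gives the redundant audiovisual distortion the experiments tested; the finding that distorting only one modality can suffice would correspond to applying the weights to a single output channel.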