
    PLXTRM: prediction-led eXtended-guitar tool for real-time music applications and live performance

    This article presents PLXTRM, a system that tracks picking-hand micro-gestures for real-time music applications and live performance. PLXTRM taps into the existing gesture vocabulary of the guitar player. On the first level, PLXTRM provides a continuous controller that does not require the musician to learn and integrate extrinsic gestures, avoiding additional cognitive load. Beyond the musical applications enabled by this continuous control, the second aim is to harness PLXTRM's predictive power: using a reservoir network, string onsets are predicted within a certain time frame based on the spatial trajectory of the guitar pick. In this time frame, manipulations to the audio signal can be introduced before the string actually sounds, 'prefacing' note onsets. Thirdly, PLXTRM facilitates the distinction of playing features such as up-strokes vs. down-strokes, string selection and the continuous velocity of gestures, thereby exploring new expressive possibilities.
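
    The prediction step lends itself to a small illustration. Below is a minimal Python sketch of an echo-state (reservoir) network trained to score "string onset coming up" from the 3D trajectory of the pick. The abstract only states that a reservoir network is used; the reservoir size, spectral radius, ridge-regression readout and input features here are assumptions for illustration, not PLXTRM's implementation.

```python
# Minimal echo-state-network sketch for onset prediction from pick trajectories.
# Sizes, spectral radius and the ridge-regression readout are illustrative
# assumptions, not PLXTRM's code.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES = 3, 200            # 3D pick position -> 200 reservoir units (assumed)

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(traj):
    """traj: (T, 3) pick positions; returns (T, N_RES) reservoir states."""
    x = np.zeros(N_RES)
    states = []
    for u in traj:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(trajs, onset_targets, ridge=1e-2):
    """Fit a linear readout mapping states to 'onset within the next frames' labels."""
    X = np.vstack([run_reservoir(t) for t in trajs])
    y = np.concatenate(onset_targets)
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

def predict_onsets(w_out, traj):
    return run_reservoir(traj) @ w_out      # score per frame; threshold to trigger FX
```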

    Estimation of Guitar Fingering and Plucking Controls based on Multimodal Analysis of Motion, Audio and Musical Score

    This work presents a method for the extraction of instrumental controls during guitar performances. The method is based on the analysis of multimodal data combining motion capture, audio analysis and the musical score. High-speed video cameras with marker identification are used to track the position of finger bones and articulations, and audio is recorded with a transducer measuring vibration on the guitar body. The extracted parameters are divided into left-hand controls, i.e. fingering (which string and fret is pressed with which left-hand finger), and right-hand controls, i.e. the plucked string, the plucking finger and the characteristics of the pluck (position, velocity and angles with respect to the string). Controls are estimated from probability functions of low-level features, namely the plucking instants (i.e. note onsets), the pitch and the distances of the fingers of both hands to the strings and frets. Note onsets are detected via audio analysis, the pitch is extracted from the score and the distances are computed using 3D Euclidean geometry. Results show that by combining multimodal information it is possible to estimate such a comprehensive set of control features, with particularly high performance for the fingering and plucked-string estimation. The accuracy for the plucking finger and the pluck characteristics is lower, but improvements are foreseen, including a hand model and the use of high-speed cameras for calibration and evaluation. A. Perez-Carrillo was supported by a Beatriu de Pinos grant (2010 BP-A 00209) from the Catalan Research Agency (AGAUR), and J. Ll. Arcos was supported by the ICT-2011-8-318770 and 2009-SGR-1434 projects.
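
    As a small illustration of the geometric features mentioned above, the sketch below computes the 3D distance from a fingertip marker to a string modeled as a segment between its nut and bridge anchors. The marker coordinates are hypothetical, and the probability functions the paper builds on top of such distances are not reproduced here.

```python
# Sketch of one low-level feature: the 3D distance from a fingertip marker to a
# string, modeled as a line segment between nut and bridge anchor points.
# Coordinates are hypothetical example values.
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from point p to segment ab (all 3D numpy arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

# Hypothetical motion-capture frame: a fingertip and the anchors of string 1.
fingertip = np.array([0.012, 0.043, 0.101])
nut_1     = np.array([0.000, 0.040, 0.100])
bridge_1  = np.array([0.648, 0.041, 0.099])

d = point_to_segment_distance(fingertip, nut_1, bridge_1)
# Distances like d, computed for every finger/string pair, would feed the
# probability functions that, combined with note onsets and the score pitch,
# select the most likely fingering and plucked string.
```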

    Look, Listen and Learn

    We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself: the correspondence between the visual and the audio streams. We introduce a novel "Audio-Visual Correspondence" learning task that makes use of this. Training visual and audio networks from scratch, without any supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task and, more interestingly, to result in good visual and audio representations. These features set a new state-of-the-art on two sound classification benchmarks and perform on par with state-of-the-art self-supervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks. Appears in: IEEE International Conference on Computer Vision (ICCV) 2017.
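
    A minimal sketch of the Audio-Visual Correspondence objective may help make the setup concrete: two subnetworks embed a video frame and an audio spectrogram, and a small classifier predicts whether the pair comes from the same video. The layer sizes, optimizer settings and data pipeline below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the Audio-Visual Correspondence (AVC) objective: a vision subnetwork
# and an audio subnetwork are trained jointly to classify whether a frame and an
# audio clip come from the same video. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
        )
    def forward(self, x):
        return self.net(x)

vision, audio = SmallEncoder(3), SmallEncoder(1)   # RGB frame / log-spectrogram
fusion = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(list(vision.parameters()) + list(audio.parameters())
                       + list(fusion.parameters()), lr=1e-4)

def avc_step(frames, spectrograms, labels):
    """labels: 1 if frame and audio are from the same video, else 0."""
    logits = fusion(torch.cat([vision(frames), audio(spectrograms)], dim=1))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# One training step on a random toy batch of 8 frame/spectrogram pairs.
loss = avc_step(torch.randn(8, 3, 224, 224), torch.randn(8, 1, 128, 100),
                torch.randint(0, 2, (8,)))
```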

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in UnrealTournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
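
    To make the fuzzy-logic mechanism concrete, the sketch below fuzzifies two hypothetical game variables, applies a single rule and defuzzifies the result onto a frustration scale. The variables, membership functions and rule are invented for illustration; they are not taken from FLAME or from the UnrealTournament 2004 study.

```python
# Toy fuzzy inference: fuzzify inputs, apply one rule, defuzzify.
# The variables ("damage taken", "kill rate") and the rule are hypothetical.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(damage_taken, kill_rate):
    # Fuzzify the inputs.
    damage_high = tri(damage_taken, 40, 80, 120)
    kills_low   = tri(kill_rate, -0.5, 0.0, 0.5)
    # Rule: IF damage is high AND kills are low THEN frustration is high.
    rule_strength = min(damage_high, kills_low)
    # Simple weighted defuzzification onto a 0..1 frustration scale.
    return 0.2 + 0.8 * rule_strength

print(estimate_frustration(damage_taken=90, kill_rate=0.1))   # -> 0.8
```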

    Using Sound to Enhance Users’ Experiences of Mobile Applications

    The latest smartphones, with GPS, electronic compasses, directional audio, touch screens, etc., hold potential for location-based services that are easier to use than traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces may be designed that let users search for information simply by pointing in a direction. Database queries can be created from GPS location and compass direction data. Users can get guidance to locations through pointing gestures, spatial sound and simple graphics. This article describes two studies testing prototype applications with multimodal user interfaces built on spatial audio, graphics and text. Tests show that users appreciated the applications for their ease of use, for being fun and effective, and for allowing them to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.
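
    The "query by pointing" idea can be sketched in a few lines: combine a GPS fix with a compass heading and return the points of interest that fall within a narrow cone in front of the user. The POI list, cone width and bearing-only matching below are assumptions for illustration, not the prototypes' implementation.

```python
# Sketch of pointing-based search: keep POIs whose bearing from the user lies
# within a cone around the compass heading. POIs and cone width are made up.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def pointed_at(user_lat, user_lon, heading_deg, pois, cone=15.0):
    """Return POI names whose bearing is within +/- cone degrees of the heading."""
    hits = []
    for name, lat, lon in pois:
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon) - heading_deg + 180) % 360 - 180)
        if diff <= cone:
            hits.append((diff, name))
    return [name for _, name in sorted(hits)]

pois = [("Cafe", 59.334, 18.065), ("Museum", 59.331, 18.058)]
print(pointed_at(59.3325, 18.0620, heading_deg=45.0, pois=pois))  # only the Cafe lies ahead
```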

    hpDJ: An automated DJ with floorshow feedback

    Many radio stations and nightclubs employ Disk-Jockeys (DJs) to provide a continuous, uninterrupted stream or "mix" of dance music, built from a sequence of individual song-tracks. In the last decade, commercial pre-recorded compilation CDs of DJ mixes have become a growth market. DJs exercise skill in deciding an appropriate sequence of tracks and in mixing 'seamlessly' from one track to the next. Online access to large-scale archives of digitized music via automated music information retrieval systems offers users the possibility of discovering many songs they like, but the majority of consumers are unlikely to want to learn the DJ skills of sequencing and mixing. This paper describes hpDJ, an automatic method by which compilations of dance music can be sequenced and seamlessly mixed by computer, with minimal user involvement. The user may specify a selection of tracks and give a qualitative indication of the type of mix required. The resultant mix can be presented as a single continuous digital audio file, whether for burning to CD, for play-out from a personal playback device such as an iPod, or for play-out to rooms full of dancers in a nightclub. Results from an early version of this system have been tested on an audience of patrons in a London nightclub, with very favourable results. Subsequent to that experiment, we designed technologies which allow the hpDJ system to monitor the responses of crowds of dancers/listeners, so that hpDJ can dynamically react to those responses. The initial intention was that hpDJ would monitor the crowd's reaction to the song-track currently being played and use that response to guide its selection of subsequent song-tracks in the mix. In that version, it is assumed that all the song-tracks exist in some archive or library of pre-recorded files. However, once reliable crowd-monitoring technology is available, it becomes possible to use the crowd-response data to dynamically "remix" existing song-tracks (i.e., alter the track in some way, tailoring it to the response of the crowd) and even to dynamically "compose" new song-tracks suited to that crowd. Thus, the music played by hpDJ to any particular crowd of listeners on any particular night becomes a direct function of that particular crowd's responses on that particular night. On a different night, the same crowd of people might react in a different way, leading hpDJ to create different music. The music composed and played by hpDJ could therefore be viewed as an "emergent" property of the dynamic interaction between the computer system and the crowd, and the crowd could then be viewed as having collectively collaborated on composing the music played that night. This en masse collective composition raises some interesting legal issues regarding the ownership of the composition (i.e., who, exactly, is the author of the work?), but revenue-generating businesses can nevertheless plausibly be built from such technologies.
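
    The sequencing half of what hpDJ automates can be illustrated with a toy example: order a track list so that consecutive tempos are close, then derive a crossfade length measured in beats. The greedy nearest-tempo heuristic and the 16-beat overlap below are assumptions for illustration, not hpDJ's actual algorithm.

```python
# Toy track sequencing and crossfade timing; heuristics are illustrative only.
def sequence_by_tempo(tracks):
    """tracks: list of (title, bpm). Greedy chain starting from the slowest track."""
    remaining = sorted(tracks, key=lambda t: t[1])
    order = [remaining.pop(0)]
    while remaining:
        last_bpm = order[-1][1]
        nxt = min(remaining, key=lambda t: abs(t[1] - last_bpm))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def crossfade_points(order, beats_overlap=16):
    """Yield (outgoing, incoming, seconds of overlap) for each transition."""
    for (a, bpm_a), (b, bpm_b) in zip(order, order[1:]):
        yield a, b, beats_overlap * 60.0 / max(bpm_a, bpm_b)

playlist = [("Track A", 126), ("Track B", 128), ("Track C", 122)]
for a, b, secs in crossfade_points(sequence_by_tempo(playlist)):
    print(f"mix {a} -> {b} over {secs:.1f} s")
```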

    Music Maker – A Camera-based Music Making Tool for Physical Rehabilitation

    The therapeutic effects of playing music are increasingly recognized in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker performed reliably and showed its promise as a therapeutic device. National Science Foundation (IIS-0308213, IIS-039009, IIS-0093367, P200A01031, EIA-0202067 to M.B.); National Institutes of Health (DC-03663 to E.S.); Boston University Dudley Allen Sargent Research Fund (to A.L.).
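
    The core mapping in such a camera-based instrument can be sketched simply: a tracked point's horizontal position selects a pitch and its speed sets the note velocity. The value ranges and the five-note scale below are assumptions for illustration; the actual system builds its mappings in EyesWeb.

```python
# Sketch of mapping a tracked body-part position to a note. Scale, ranges and
# the tracker output used in the example are hypothetical.
def position_to_note(x_norm, speed_norm, scale=(60, 62, 64, 67, 69)):
    """x_norm, speed_norm in [0, 1]; returns (MIDI pitch, MIDI velocity)."""
    idx = min(int(x_norm * len(scale)), len(scale) - 1)
    velocity = max(1, min(127, int(speed_norm * 127)))
    return scale[idx], velocity

# Hypothetical tracker output: fingertip at 70% of the frame width, moving fast.
print(position_to_note(x_norm=0.7, speed_norm=0.9))   # -> (67, 114)
```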

    Music Information Retrieval Meets Music Education

    This paper addresses the use of Music Information Retrieval (MIR) techniques in music education and their integration into learning software. A general overview of systems that are either commercially available or at the research stage is presented. Furthermore, three well-known MIR methods used in music learning systems, and their state of the art, are described: music transcription, solo and accompaniment track creation, and generation of performance instructions. As a representative example of a music learning system developed within the MIR community, the Songs2See software is outlined. Finally, challenges and directions for future research are described.
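
    As a toy example of one building block behind the transcription methods surveyed here, the sketch below estimates the fundamental frequency of a monophonic audio frame by autocorrelation. Real systems such as Songs2See are far more elaborate; the frame length and search range are arbitrary choices.

```python
# Naive autocorrelation pitch estimator for a single monophonic frame.
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Return a rough fundamental-frequency estimate for one audio frame."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(sr // 10) / sr                           # 100 ms test tone
print(estimate_f0(np.sin(2 * np.pi * 220.0 * t), sr))  # ~220 Hz
```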

    From Deterministic to Generative: Multi-Modal Stochastic RNNs for Video Captioning

    Video captioning is in essence a complex natural process, which is affected by various uncertainties stemming from video content, subjective judgment, etc. In this paper we build on recent progress in using the encoder-decoder framework for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states, and such complex uncertainty cannot be modeled efficiently by deterministic models. We propose a generative approach, referred to as the multi-modal stochastic RNN network (MS-RNN), which models the uncertainty observed in the data using latent stochastic variables. As a result, MS-RNN can improve the performance of video captioning and generate multiple sentences to describe a video under different random factors. Specifically, a multi-modal LSTM (M-LSTM) is first proposed to interact with both visual and textual features to capture a high-level representation. Then, a backward stochastic LSTM (S-LSTM) is proposed to support uncertainty propagation by introducing latent variables. Experimental results on the challenging MSVD and MSR-VTT datasets show that our proposed MS-RNN approach outperforms state-of-the-art methods on video captioning benchmarks.
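
    The idea of a stochastic recurrent step can be sketched as follows: at each time step a latent Gaussian variable is sampled with the reparameterization trick and fed into an LSTM cell, so that repeated decoding runs produce different outputs. The layer sizes and the exact point where the latent variable is injected are assumptions for illustration, not the paper's S-LSTM.

```python
# Sketch of an LSTM step with a latent stochastic variable (reparameterization).
# Dimensions and the injection point are illustrative assumptions.
import torch
import torch.nn as nn

class StochasticLSTMCell(nn.Module):
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim + z_dim, hid_dim)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)

    def forward(self, x, state):
        h, c = state
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        h, c = self.cell(torch.cat([x, z], dim=-1), (h, c))
        kl = 0.5 * torch.sum(mu**2 + logvar.exp() - logvar - 1)   # KL vs N(0, I)
        return (h, c), kl

cell = StochasticLSTMCell(in_dim=300, hid_dim=512, z_dim=64)
state = (torch.zeros(8, 512), torch.zeros(8, 512))
state, kl = cell(torch.randn(8, 300), state)   # one decoding step for a batch of 8
```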