
    Augmenting Sonic Experiences Through Haptic Feedback

    Sonic experiences are usually considered the result of auditory feedback alone. From a psychological standpoint, however, this is true only when a listener is kept isolated from concurrent stimuli targeting the other senses. Such stimuli, in fact, may either interfere with the sonic experience if they distract the listener, or conversely enhance it if they convey sensations coherent with what is being heard. This chapter is concerned with haptic augmentations having effects on auditory perception, for example how different vibrotactile cues provided by an electronic musical instrument may affect its perceived sound quality or the playing experience. Results from different experiments are reviewed showing that the auditory and somatosensory channels together can produce constructive effects resulting in measurable perceptual enhancement. Such enhancement may affect sonic dimensions ranging from basic auditory parameters, such as the perceived intensity of frequency components, up to more complex perceptions which contribute to forming our ecology of everyday or musical sounds.

    Musical Haptics

    Haptic Musical Instruments; Haptic Psychophysics; Interface Design and Evaluation; User Experience; Musical Performance

    Tangibility and Richness in Digital Musical Instrument Design

    The sense of touch plays a fundamental role in musical performance: alongside hearing, it is the primary sensory modality used when interacting with musical instruments. Learning to play a musical instrument is one of the most developed haptic cultural practices, and within acoustic musical practice at large, the importance of touch and its close relationship to virtuosity and expression is well recognised. With digital musical instruments (DMIs) – instruments involving a combination of sensors and a digital sound engine – touch-mediated interaction remains the foremost means of control, but the interfaces of such instruments do not yet engage with the full spectrum of sensorimotor capabilities of a performer. This poses compelling questions for digital instrument design: how does the nuance and richness of physical interaction with an instrument manifest itself in the digital domain? Which design parameters are most important for haptic experience, and how do these parameters affect musical performance? Built around three practical studies which utilise DMIs as technology probes, this thesis addresses these questions from the point of view of design, of empirical musicology, and of tangible computing. In the first study musicians played a DMI with continuous pitch control and vibrotactile feedback in order to understand how dynamic tactile feedback can be implemented and how it influences musician experience and performance. The results suggest that certain vibrotactile feedback conditions can increase musicians’ tuning accuracy, but also disrupt temporal performance. The second study examines the influence of asynchronies between audio and haptic feedback. Two groups of musicians, amateurs and professional percussionists, were tasked with performing on a percussive DMI with variable action-sound latency. Differences between the two groups in terms of temporal accuracy and quality judgements illustrate the complex effects of asynchronous multimodal feedback. In the third study guitar-derivative DMIs with variable levels of control richness were observed with non-musicians and guitarists. The results from this study help clarify the relationship between tangible design factors, sensorimotor expertise and instrument behaviour. This thesis introduces a descriptive model of performer-instrument interaction, the projection model, which unites the design investigations from each study and provides a series of reflections and suggestions on the role of touch in DMI design.
    PhD thesis; Doctoral Training Centre for Media and Arts Technology
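
    The second study's latency manipulation can be pictured as a fixed delay inserted between the performer's strike and the audible result. A minimal sketch in Python, assuming a 44.1 kHz sample rate and illustrative latency values (neither is specified above):

```python
import numpy as np

SR = 44100  # sample rate in Hz (an assumption, not given in the abstract)

def add_action_sound_latency(audio: np.ndarray, latency_ms: float) -> np.ndarray:
    """Delay the audible response to a performer's action by a fixed amount,
    emulating one action-sound latency condition from the percussion study."""
    pad = int(round(SR * latency_ms / 1000.0))
    return np.concatenate([np.zeros(pad), audio])

# A synthetic drum hit rendered under three hypothetical latency conditions.
hit = np.random.randn(4410) * np.exp(-np.linspace(0.0, 6.0, 4410))
conditions = {ms: add_action_sound_latency(hit, ms) for ms in (0, 10, 20)}
```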

    Musical Haptics

    This Open Access book offers an original interdisciplinary overview of the role of haptic feedback in musical interaction. Divided into two parts, part I examines the tactile aspects of music performance and perception, discussing how they affect user experience and performance in terms of usability, functionality and perceived quality of musical instruments. Part II presents engineering, computational, and design approaches and guidelines that have been applied to render and exploit haptic feedback in digital musical interfaces. Musical Haptics introduces an emerging field that brings together engineering, human-computer interaction, applied psychology, musical aesthetics, and music performance. The latter, defined as the complex system of sensory-motor interactions between musicians and their instruments, presents a well-defined framework in which to study basic psychophysical, perceptual, and biomechanical aspects of touch, all of which will inform the design of haptic musical interfaces. Tactile and proprioceptive cues enable embodied interaction and inform sophisticated control strategies that allow skilled musicians to achieve high performance and expressivity. The use of haptic feedback in digital musical interfaces is expected to enhance user experience and performance, improve accessibility for disabled persons, and provide an effective means for musical tuition and guidance.

    Intonation of Middle School Violinists: The Roles of Pitch Discrimination and Sensorimotor Integration

    Intonation in string instrument performance consists of the perception of musical pitch and the motor skills necessary to produce musical pitch. Scholars in cognitive psychology have suggested that the association of perception and motor skills results in the formation of sensorimotor skills which play a key role in skilled behaviors, including music performance. The purpose of this study was to examine the extent to which pitch discrimination and sensorimotor integration explain the intonation of middle school violinists. Specific research questions were: (1) What are the correlations among pitch discrimination threshold and the following performance variables with and without auditory feedback: intonation error, intonation precision, interval error, and interval precision? (2) To what extent do pitch discrimination threshold and intonation error with masked auditory feedback explain intonation error with normal auditory feedback of middle school string players when controlling for student characteristics of grade, years of experience, private lessons, handedness, finger placement markers, weekly practice time, and school? (3) Do intonation error or interval error differ according to the left-hand finger(s) used to create the pitch(es)? Participants (N = 179) were violinists from middle schools in Michigan and Oklahoma. Each participant completed three tasks: a pitch discrimination task, a performance task, and a musical background questionnaire. In the pitch discrimination task, participants heard 16 pairs of pitches and for each pair, adjusted the second pitch to match the first. In the performance task, participants performed a 2-octave G-major scale under two conditions: normal auditory feedback and masked auditory feedback. To mask auditory feedback, participants performed while wearing noise-canceling headphones and listening to background noise. One variable, pitch discrimination threshold, was measured from the pitch discrimination task. Four variables were measured from the performance task: intonation error, intonation precision, interval error, and interval precision. The musical background questionnaire collected demographic and musical experience information. Descriptive statistics of the study variables indicated that intonation error under normal auditory feedback conditions was over three times greater than pitch discrimination threshold. Participants performed better under normal auditory feedback conditions than masked feedback conditions. Mean differences for each performance variable between the two conditions were significant but did not exceed 5 cents. Results for the research questions indicated a significant, moderate correlation between pitch discrimination threshold and intonation under normal auditory feedback conditions. Moderate positive correlations were found between intonation error and precision and between interval error and precision. A hierarchical multiple regression model revealed that intonation error under masked auditory feedback conditions was the strongest predictor of intonation error under normal auditory feedback conditions. Pitch discrimination threshold was a significant, but weaker, predictor of intonation error under normal auditory feedback conditions. Lastly, a regression model with student fixed-effects revealed that pitches performed with the second finger were significantly less accurate than those performed with the first or third fingers. Participants also performed whole steps more accurately than half steps. 
    Collectively, these results offer support for sensorimotor integration as an explanation of intonation. Suggestions for future research and implications for pedagogy are discussed.
    PhD thesis, Music: Music Education, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162979/1/mkbaugh_1.pd
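
    The abstract does not give the exact error and precision formulas, but deviation-in-cents measures of this kind are conventional. A minimal sketch, assuming equal temperament at A4 = 440 Hz, intonation error as mean absolute deviation, and precision as the spread (standard deviation) of the signed deviations:

```python
import math

A4 = 440.0  # reference tuning in Hz (an assumption)

def target_hz(midi_note: int) -> float:
    """Equal-tempered target frequency for a MIDI note number."""
    return A4 * 2.0 ** ((midi_note - 69) / 12.0)

def cents_deviation(measured_hz: float, target: float) -> float:
    """Signed deviation of a performed pitch from its target, in cents."""
    return 1200.0 * math.log2(measured_hz / target)

def error_and_precision(measured_hzs, targets):
    """Error = mean absolute deviation; precision = SD of signed deviations."""
    devs = [cents_deviation(m, t) for m, t in zip(measured_hzs, targets)]
    n = len(devs)
    error = sum(abs(d) for d in devs) / n
    mean = sum(devs) / n
    precision = (sum((d - mean) ** 2 for d in devs) / n) ** 0.5
    return error, precision

# Example: the first three notes of a G major scale, played slightly sharp.
measured = [196.4, 220.9, 247.3]                # Hz, from a pitch tracker
targets = [target_hz(n) for n in (55, 57, 59)]  # G3, A3, B3
print(error_and_precision(measured, targets))
```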

    Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation

    When designing an augmented acoustic instrument, it is often of interest to retain an instrument's sound quality and nuanced response while leveraging the richness of digital synthesis. Digital audio has traditionally been generated through speakers, separating sound generation from the instrument itself, or by adding an actuator within the instrument's resonating body, imparting new sounds along with the original. We offer a third option, isolating the playing interface from the actuated resonating body, allowing us to rewrite the relationship between performance action and sound result while retaining the general form and feel of the acoustic instrument. We present a hybrid acoustic-electronic violin based on a stick-body electric violin and an electrodynamic polyphonic pick-up capturing individual string displacements. A conventional violin body acts as the resonator, actuated using digitally altered audio of the string inputs. By attaching the electric violin above the body with acoustic isolation, we retain the physical playing experience of a normal violin along with some of the acoustic filtering and radiation of a traditional build. We propose using the hybrid instrument, with digitally automated pitch and tone correction, as an easier-to-play violin and a potential motivational tool for beginning violinists.
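
    The signal path described above (per-string pickup signals, independent digital alteration, then actuation of the isolated body) can be sketched as a block-based mixer. This is an illustrative skeleton only; the per-string gain is a stand-in for the pitch and tone correction the authors describe:

```python
import numpy as np

def actuator_drive(strings: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Combine per-string pickup signals into one actuator drive signal.

    strings: (4, n) block of individual string signals from the
             polyphonic electrodynamic pick-up
    gains:   per-string gains, a placeholder for the per-string digital
             alteration applied before the violin body is actuated
    """
    processed = strings * gains[:, None]  # independent per-string processing
    return processed.sum(axis=0)          # mono signal sent to the body actuator

# Example: a 512-sample block with the G string slightly emphasised.
block = np.random.randn(4, 512) * 0.1
drive = actuator_drive(block, np.array([1.2, 1.0, 1.0, 1.0]))
```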

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this end, starting from recent findings in neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that take these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories, research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on the EU ICT-H2020 projects "weDRAW" and "TELMI", on which I worked during the PhD.

    Physical modelling meets machine learning: performing music with a virtual string ensemble

    This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e. the bow scraping against the string). Most existing synthesis methods utilize recorded audio samples, which perform quite well for single-excitation instruments but not continuous-excitation instruments. This work improves the realism of synthesis of violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments. A string's wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real-time. The program also generates animated video of the instruments being performed. To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model. The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller is used to adjust the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously-trained support vector machines. The timbre judgements are retained after each performance and are used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre for future performances of the same music. The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Due to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
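
    The pipeline described above (modal decomposition, external forcing, bridge-force summation, body-IR convolution) can be sketched compactly. A minimal illustration using semi-implicit Euler integration, ideal harmonic mode frequencies, and a random placeholder body impulse response; none of these specifics are taken from the dissertation:

```python
import numpy as np

SR = 44100      # sample rate in Hz
N_MODES = 40    # number of string modes, as in the dissertation
F0 = 196.0      # open G string fundamental in Hz (an assumption)

def bridge_signal(force: np.ndarray, excite_pos: float = 0.12) -> np.ndarray:
    """Drive a modal string with an external force; return the bridge signal.

    Each mode is a damped harmonic oscillator excited at the bowing/plucking
    point; modal velocities are summed near the bridge to approximate the
    force transmitted to the instrument body.
    """
    n = np.arange(1, N_MODES + 1)
    omega = 2.0 * np.pi * F0 * n                # ideal-string mode frequencies
    shape_in = np.sin(np.pi * n * excite_pos)   # mode shape at excitation point
    shape_out = np.sin(np.pi * n * 0.98)        # mode shape near the bridge
    y = np.zeros(N_MODES)                       # modal displacements
    v = np.zeros(N_MODES)                       # modal velocities
    dt = 1.0 / SR
    out = np.empty(len(force))
    for i, f in enumerate(force):
        accel = f * shape_in - omega ** 2 * y - 2.0 * n * v  # per-mode damping
        v += accel * dt                         # semi-implicit Euler step
        y += v * dt
        out[i] = shape_out @ v
    return out

# Pluck-like excitation, then convolution with a placeholder body response.
force = np.zeros(SR)                # one second of input
force[:40] = 1.0                    # brief excitation burst
body_ir = np.random.randn(256) * np.exp(-np.linspace(0.0, 8.0, 256))
audio = np.convolve(bridge_signal(force), body_ir)[:SR]
```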