
    Synchronizing eye tracking and optical motion capture: How to bring them together

    Both eye tracking and motion capture technologies are nowadays frequently used in the human sciences, although the two are usually used separately. However, measuring both eye and body movements simultaneously would offer great potential for investigating cross-modal interaction in human (e.g. music- and language-related) behavior. Here we combined an Ergoneers Dikablis head-mounted eye tracker with a Qualisys Oqus optical motion capture system. In order to synchronize the recordings of both devices, we developed a generalizable solution that does not rely on any (cost-intensive) ready-made, company-provided synchronization solution. At the beginning of each recording, the participant nods quickly while fixating on a target and keeping the eyes open – a motion yielding a sharp vertical displacement in both the mocap and eye data. This displacement can be reliably detected with a peak-picking algorithm and used to accurately align the mocap and eye data. The method produces accurate synchronization results on clean data and therefore provides an attractive alternative to costly plug-ins, as well as a solution in cases where ready-made synchronization options are unavailable.
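
    A minimal, self-contained sketch of the nod-based peak-picking alignment described above is given below. The signal names, sample rates, and synthetic nod data are illustrative assumptions, not the authors' code or recordings.

```python
# Sketch only: detect the nod in both vertical signals and derive the time offset.
# The signals and sample rates below are synthetic/assumed, not data from the study.
import numpy as np
from scipy.signal import find_peaks

def nod_time(signal, sample_rate):
    """Time (s) of the sharpest nod-like event in a vertical-position signal."""
    velocity = np.diff(signal) * sample_rate            # quick nod -> large downward velocity spike
    peaks, props = find_peaks(-velocity, prominence=3 * np.std(velocity))
    if peaks.size == 0:
        raise ValueError("no nod-like peak found; check the recording")
    best = peaks[np.argmax(props["prominences"])]       # most prominent spike = sync event
    return best / sample_rate

def synthetic_nod(sample_rate, duration, nod_at):
    """Flat vertical trace with one quick dip (the nod) plus a little noise."""
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    dip = -0.05 * np.exp(-((t - nod_at) ** 2) / (2 * 0.05 ** 2))
    return dip + 1e-4 * np.random.default_rng(0).standard_normal(t.size)

eye_y = synthetic_nod(60.0, 10.0, nod_at=2.0)        # head-mounted eye tracker, 60 Hz
head_z = synthetic_nod(200.0, 10.0, nod_at=2.5)      # mocap head marker, 200 Hz

offset = nod_time(head_z, 200.0) - nod_time(eye_y, 60.0)
print(f"shift eye-tracking timestamps by {offset:.3f} s")   # ~0.500 s here
```

    In practice, the recovered offset would be applied to the eye-tracking timestamps before merging the two data streams.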

    Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder

    Music accompaniment generation is a crucial aspect of the composition process. Deep neural networks have made significant strides in this field, but it remains a challenge for AI to effectively incorporate human emotions to create beautiful accompaniments. Existing models struggle to effectively characterize human emotions within neural network models while composing music. To address this issue, we propose the use of an easy-to-represent emotion flow model, the Valence/Arousal Curve, which allows emotional information to be made compatible with the model through data transformation and enhances the interpretability of emotional factors by using a Variational Autoencoder as the model structure. Further, we used relative self-attention to maintain the structure of the music at the phrase level and to generate a richer accompaniment when combined with the rules of music theory.
    Comment: Accepted by the International Joint Conference on Neural Networks 2023 (IJCNN 2023).
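
    As a rough illustration of the conditioning idea (not the paper's implementation), the sketch below wires a valence/arousal curve into both the encoder and decoder of a small variational autoencoder. All dimensions, the piano-roll input format, and the plain feed-forward layers are assumptions, and the paper's relative self-attention component is omitted.

```python
# Illustrative conditional VAE: the valence/arousal (V/A) curve is concatenated to
# the input on the way in and to the latent code on the way out. Shapes are assumed.
import torch
import torch.nn as nn

class VACondVAE(nn.Module):
    def __init__(self, in_dim=128 * 16, va_dim=32, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim + va_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + va_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim), nn.Sigmoid(),        # piano-roll note probabilities
        )

    def forward(self, roll, va_curve):
        h = self.enc(torch.cat([roll, va_curve], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(torch.cat([z, va_curve], dim=-1)), mu, logvar

# Hypothetical shapes: a 16-step, 128-pitch accompaniment bar per example, and a
# 16-step (valence, arousal) curve flattened to 32 values as the condition.
model = VACondVAE()
roll = torch.rand(8, 128 * 16)
va = torch.rand(8, 32)
recon, mu, logvar = model(roll, va)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.binary_cross_entropy(recon, roll) + kl
```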

    Towards a Better Understanding of Emotion Communication in Music: An Interactive Production Approach.

    It has been well established that composers and performers are able to encode certain emotional expressions in music, which in turn are decoded by listeners and, in general, successfully recognised. There is still much to discover, however, as to how musical cues combine to shape different emotions in the music, since previous literature has tended to focus on a limited number of cues and emotional expressions. The work in this thesis aims to investigate how combinations of tempo, articulation, pitch, dynamics, brightness, mode, and later, instrumentation, are used to shape sadness, joy, calmness, anger, fear, power, and surprise in Western tonal music. In addition, new tools for music and emotion research are presented with the aim of providing an efficient production approach to explore a large cue-emotion space in a relatively short time. To this end, a new interactive interface called EmoteControl was created which allows users to alter musical pieces in real-time through the available cues. Moreover, musical pieces were specifically composed to be used as stimuli. Empirical experiments were then carried out with the interface to determine how participants shaped different emotions in the pieces using the available cues. Specific cue combinations for the different emotions were produced. Findings revealed that, overall, mode and tempo were the strongest contributors to the conveyed emotion, whilst brightness was the least effective cue. However, the importance of the cues varied depending on the intended emotion. Finally, a comparative evaluation of production and traditional approaches was carried out, which showed that similar results may be obtained with both. However, the production approach allowed for a larger cue-emotion space to be navigated in a shorter time. In sum, the production approach allowed participants to directly show us how they think emotional expressions should sound, and how they are shaped in music.
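
    As a toy illustration of what a produced cue combination might look like, the sketch below records one hypothetical setting per emotion. The cue names follow the abstract, but the value ranges and example settings are invented for illustration and are not results from the thesis.

```python
# Hypothetical data structure for produced cue combinations; values are illustrative only.
from dataclasses import dataclass

@dataclass
class CueSettings:
    tempo_bpm: float        # speed of the piece
    articulation: float     # 0 = legato ... 1 = staccato
    pitch_shift: int        # transposition in semitones
    dynamics: float         # 0 = pp ... 1 = ff
    brightness: float       # 0 = dark/filtered ... 1 = bright
    mode: str               # "major" or "minor"

# Settings a participant *might* produce for two target emotions (not thesis findings).
produced = {
    "sadness": CueSettings(tempo_bpm=60, articulation=0.1, pitch_shift=-5,
                           dynamics=0.3, brightness=0.2, mode="minor"),
    "joy":     CueSettings(tempo_bpm=140, articulation=0.7, pitch_shift=4,
                           dynamics=0.8, brightness=0.9, mode="major"),
}
```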

    Multiparametric interfaces for fine-grained control of digital music

    Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians’ experience of using these systems. Two new musical controllers were created, based on a multiparametric paradigm in which multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams. The development of these systems and the evaluation of musicians’ experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces. The results highlight important themes concerning multiparametric musical control, including the use of metaphor and imagery, choreography and language creation, individual differences, and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools in which different input devices fit different creative scenarios.
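
    The sketch below illustrates the multiparametric idea in its simplest form: several continuous input streams mapped many-to-many onto musical parameters through a weight matrix. The stream and parameter names are made up for illustration; this is not the RECZ mapping tool or either controller's actual code.

```python
# Toy many-to-many mapping from concurrent input streams to musical parameters.
import numpy as np

streams = ["hand_x", "hand_y", "finger_spread", "foam_pressure"]   # inputs per frame (assumed)
params  = ["filter_cutoff", "grain_density", "amplitude"]          # synth controls (assumed)

# Each row weights one musical parameter over all input streams.
weights = np.array([
    [0.8, 0.0, 0.2, 0.0],   # filter_cutoff  <- mostly hand_x, a little finger_spread
    [0.0, 0.5, 0.5, 0.0],   # grain_density  <- hand_y and finger_spread
    [0.0, 0.0, 0.0, 1.0],   # amplitude      <- foam_pressure
])

def map_frame(frame):
    """Map one frame of normalized (0-1) input values to normalized parameters."""
    x = np.array([frame[s] for s in streams])
    return dict(zip(params, np.clip(weights @ x, 0.0, 1.0)))

print(map_frame({"hand_x": 0.4, "hand_y": 0.9, "finger_spread": 0.2, "foam_pressure": 0.7}))
```

    In a real system the weights would be learned or configured interactively rather than hard-coded, which is where the machine learning components described above come in.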

    From rituals to magic: Interactive art and HCI of the past, present, and future

    The connection between art and technology is much tighter than is commonly recognized. The emergence of aesthetic computing in the early 2000s has brought renewed focus on this relationship. In this article, we articulate how art and Human–Computer Interaction (HCI) are compatible with each other and actually essential to advance each other in this era, by briefly addressing interconnected components in both areas: interaction, creativity, embodiment, affect, and presence. After briefly introducing the history of interactive art, we discuss how art and HCI can contribute to one another by illustrating contemporary examples of art in immersive environments, robotic art, and machine intelligence in art. Then, we identify challenges and opportunities for collaborative efforts between art and HCI. Finally, we reiterate important implications and pose future directions. This article is intended as a catalyst to facilitate discussions on the mutual benefits of working together in the art and HCI communities. It also aims to provide artists and researchers in this domain with suggestions about where to go next.

    Free Jazz in the Land of Algebraic Improvisation

    We discuss the connection between free-jazz music and service-oriented computing, and advance a method for the formal, algebraic analysis of improvised performances; we aim for a better understanding of both the creative process of music improvisation and the complexity of service-oriented systems. We formalize free-jazz performances as complex dynamic systems of services, building on the idea that an improvisation can be seen as a collection of music phase spaces that organise themselves through concept blending and emerge as the performed music. We first define music phase spaces as specifications written over a class of logics that satisfy a set of requirements making them suitable for dealing with improvisation. Based on these specifications, we then formalize free-jazz performances as service applications that evolve by requiring other music fragments to be added, as service modules, to the improvisation. Finally, we present a logic for specifying free jazz based on one of Anthony Braxton's graphic notations for composition notes.
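
    The toy sketch below gives one possible, deliberately simplified reading of this service-oriented view: each music fragment declares what it provides and requires, and the improvisation grows by binding fragments that answer open requirements. All names and the binding rule are hypothetical; the paper's algebraic and logical machinery is not reproduced here.

```python
# Toy illustration of fragments as service modules with provided/required concepts.
from dataclasses import dataclass, field

@dataclass
class PhaseSpace:
    name: str
    provides: set            # musical concepts this fragment contributes
    requires: set            # concepts it asks other fragments to supply

@dataclass
class Improvisation:
    modules: list = field(default_factory=list)

    def open_requirements(self):
        provided = set().union(*(m.provides for m in self.modules)) if self.modules else set()
        required = set().union(*(m.requires for m in self.modules)) if self.modules else set()
        return required - provided

    def bind(self, fragment):
        """Add a fragment as a service module if it answers an open requirement (or opens the piece)."""
        if not self.modules or fragment.provides & self.open_requirements():
            self.modules.append(fragment)

imp = Improvisation()
imp.bind(PhaseSpace("opening motif", provides={"motif"}, requires={"pulse", "counterline"}))
imp.bind(PhaseSpace("drum texture", provides={"pulse"}, requires=set()))
print(imp.open_requirements())   # {'counterline'} is still waiting to emerge
```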