Analyzing Visual Mappings of Traditional and Alternative Music Notation
In this paper, we postulate that combining the domains of information
visualization and music studies paves the ground for a more structured analysis
of the design space of music notation, enabling the creation of alternative
music notations that are tailored to different users and their tasks. Hence, we
discuss the instantiation of a design and visualization pipeline for music
notation that follows a structured approach, based on the fundamental concepts
of information and data visualization. This enables practitioners and
researchers of digital humanities and information visualization, alike, to
conceptualize, create, and analyze novel music notation methods. Based on the
analysis of relevant stakeholders and their usage of music notation as a means
of communication, we identify a set of relevant features typically encoded in
different annotations and encodings, as used by interpreters, performers, and
readers of music. We analyze the visual mappings of musical dimensions for
varying notation methods to highlight gaps and frequent usages of encodings,
visual channels, and Gestalt laws. This detailed analysis leads us to the
conclusion that such an under-researched area in information visualization
holds the potential for fundamental research. This paper discusses possible
research opportunities, open challenges, and arguments that can be pursued in
the process of analyzing, improving, or rethinking existing music notation
systems and techniques.
Comment: 5 pages including references, 3rd Workshop on Visualization for the
Digital Humanities, Vis4DH, IEEE Vis 201
Embodiment, sound and visualization : a multimodal perspective in music education
Recently, many studies have emphasized the role of body movements in
processing, sharing, and giving meaning to music. At the same time,
neuroscience studies suggest that different parts of the brain are integrated
and activated by the same stimuli: sounds, for example, can be perceived by
touch and can evoke imagery, energy, fluency, and periodicity. This
interaction of the auditory, visual, and motor senses can be found in verbal
descriptions of music and among children during their spontaneous games. The
question to be asked is whether a more multisensory and embodied approach
could redefine some of our assumptions regarding music education. Recent
research on embodiment and multimodal perception in instrumental teaching
could suggest new directions in music education. Can we consider the
integration of body movement, listening, metaphor visualization, and singing
as more effective for the process of musical understanding than a disembodied
and fragmented approach?
Music Information Retrieval in Live Coding: A Theoretical Framework
The work presented in this article has been partly conducted while the first author was at Georgia Tech from 2015–2017 with the support of the School of Music, the Center for Music Technology and Women in Music Tech at Georgia Tech.
Another part of this research has been conducted while the first author was at Queen Mary University of London from 2017–2019 with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382.
Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer to make musical decisions based on audio content analysis and to explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding and thus have rarely been used in live coding; when they have been used, it has been with a focus on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through an application programming interface library written in SuperCollider, MIRLC. We found that it is still a technical challenge to use high-level features in real time, yet using rhythmic and tonal properties (midlevel features) in combination with text-based information (e.g., tags) helps to achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions of utilizing MIR approaches in the computer music field.
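As a rough illustration of the low-level feature extraction the abstract refers to, the spectral centroid (a standard timbral descriptor: the magnitude-weighted mean frequency of a frame's spectrum) can be computed in a few lines. This is a minimal NumPy sketch, not MIRLC's actual SuperCollider API; all names are illustrative.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Spectral centroid of one audio frame: the magnitude-weighted
    mean of the FFT bin frequencies, a classic low-level timbral feature."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return float((freqs * spectrum).sum() / total)

sr = 44100
f = 20 * sr / 2048          # exactly 20 periods per frame: no spectral leakage
t = np.arange(2048) / sr
print(spectral_centroid(np.sin(2 * np.pi * f * t), sr))  # ~430.66, the sine's frequency
```

For a pure tone whose frequency falls exactly on an FFT bin, the centroid recovers that frequency; real instrument tones, with energy spread over harmonics, yield a higher centroid, which is why the feature tracks perceived "brightness".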
Timbre-invariant Audio Features for Style Analysis of Classical Music
Copyright: (c) 2014 Christof Weiß et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The Effect of Explicit Structure Encoding of Deep Neural Networks for Symbolic Music Generation
With recent breakthroughs in artificial neural networks, deep generative
models have become one of the leading techniques for computational creativity.
Despite very promising progress on image and short-sequence generation,
symbolic music generation remains a challenging problem since the structure of
compositions is usually complicated. In this study, we attempt to solve the
melody generation problem constrained by a given chord progression. This
music meta-creation problem can also be incorporated into a plan recognition
system with user inputs and predictive structural outputs. In particular, we
explore the effect of explicit architectural encoding of musical structure by
comparing two sequential generative models: LSTM (a type of RNN) and WaveNet
(a dilated temporal CNN). As far as we know, this is the first study to apply
WaveNet to symbolic music generation, as well as the first systematic
comparison between temporal CNNs and RNNs for music generation. We conducted a
survey to evaluate the generated music and applied the Variable Markov Oracle
to music pattern discovery. Experimental results show that encoding structure
more explicitly using a stack of dilated convolution layers improves
performance significantly, and that globally encoding the underlying chord
progression into the generation procedure yields further gains.
Comment: 8 pages, 13 figure
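The dilated-convolution idea behind WaveNet can be sketched without any deep-learning framework. The NumPy toy below (illustrative names, not the paper's code) implements a causal dilated 1-D convolution, where output step t only sees inputs at t, t-d, t-2d, ..., and shows why stacking layers with doubling dilations grows the receptive field exponentially with depth.

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution: output[t] = sum_j weights[j] * x[t - j*dilation],
    with zero left-padding so the output never looks at future inputs."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field (in time steps) of a stack of dilated causal layers."""
    return 1 + (kernel_size - 1) * sum(dilations)

# Doubling dilations 1, 2, 4, ..., 64 with kernel size 2: a 128-step
# receptive field from only 7 layers -- the exponential growth that lets
# a WaveNet-style stack capture long-range musical structure.
print(receptive_field(2, [1, 2, 4, 8, 16, 32, 64]))  # -> 128
```

An LSTM carries context through a recurrent state, whereas this stack exposes long-range context directly through its dilation pattern, which is one way to read the paper's claim that explicit structural encoding helps.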