Methods for Analysing Vocal Expression in Popular Music
Although voice and singing play a crucial role in many genres of popular music, to date there are only a few approaches to an in-depth exploration of vocal expression. The paper aims to present new ways of describing, analysing and visualizing several aspects of singing using computer-based tools. After outlining a theoretical framework for the study of voice and singing in popular music, some of those tools are introduced and exemplified with vocal recordings from various genres (blues, gospel music, country music, jazz). Firstly, pitch gliding (slurs, slides, bends, melismas) and vibrato are discussed with reference to a computer-based visualization of the pitch contour. Secondly, vocal timbre and phonation (e.g. vocal roughness) are explored and visualized using spectrograms.
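As a rough illustration of how such a pitch-contour visualization might work, the following sketch synthesizes a tone with vibrato and tracks the frame-wise FFT peak over time. All parameter values and the peak-picking approach are illustrative assumptions, not taken from the paper's actual tools.

```python
import numpy as np

# Synthesize one second of a 220 Hz tone with sinusoidal vibrato
# (20 Hz depth at 5 Hz rate) -- a crude stand-in for a sung note.
sr = 16000                              # sample rate in Hz
t = np.arange(sr) / sr
f0, depth, rate = 220.0, 20.0, 5.0      # base pitch, vibrato depth, vibrato rate
freq = f0 + depth * np.sin(2 * np.pi * rate * t)
phase = 2 * np.pi * np.cumsum(freq) / sr
signal = np.sin(phase)

# Frame-wise pitch contour: peak bin of a zero-padded FFT per frame
frame, hop, nfft = 1024, 256, 8192
contour = []
for start in range(0, len(signal) - frame, hop):
    windowed = signal[start:start + frame] * np.hanning(frame)
    spectrum = np.abs(np.fft.rfft(windowed, n=nfft))
    peak_bin = int(np.argmax(spectrum))
    contour.append(peak_bin * sr / nfft)    # bin index -> frequency in Hz
contour = np.array(contour)

# The contour oscillates around f0; plotting it against time would give
# the kind of pitch-contour visualization discussed above.
print(round(contour.mean(), 1), round(contour.max() - contour.min(), 1))
```

A real analysis tool would use a dedicated pitch tracker rather than a raw FFT peak, but the sketch shows the basic frame-by-frame structure of the contour extraction.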
CAIRSS for Music in arts medicine.
CAIRSS (Computer-Assisted Information Retrieval Service System) for Music is a bibliographic database of music research literature. Researchers and practitioners anywhere in the world now have online access to, and can run computerized searches across, thousands of periodical articles from hundreds of journals published worldwide. CAIRSS is available through the Institute for Music Research (IMR) at the University of Texas at San Antonio. Before describing CAIRSS in detail, we first present its evolutionary history.
A methodological approach for algorithmic composition systems' parameter spaces aesthetic exploration
Algorithmic composition is the process of creating musical material by means of formal methods. By design, algorithmic composition systems are (explicitly or implicitly) described in terms of parameters. Thus, parameter space exploration plays a key role in learning a system's capabilities. However, in the computer music field, this task has received little attention. This is due in part to the fact that changes in the human perception of the outputs, in response to changes in the parameters, can be highly nonlinear; therefore models with strongly predictable outputs are needed. The present work describes a methodology for the human perceptual (or aesthetic) exploration of generative systems' parameter spaces. As the systems' outputs are intended to produce an aesthetic experience in humans, audition plays a central role in the process. The methodology starts from a set of parameter combinations which are perceptually evaluated by the user. The sampling process for these combinations depends on the system under study and possibly on heuristic considerations. The evaluated set is processed by a compaction algorithm able to generate linguistic rules describing the distinct perceptions (classes) in the user's evaluation. The semantic level of the extracted rules allows for interpretability, while showing great potential in describing high- and low-level musical entities. As the resulting rules represent discrete points in the parameter space, possible extensions for interpolation between points are also discussed. Finally, some practical implementations and paths for further research are presented.
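The workflow described above can be sketched in miniature: sample parameter combinations, attach a listener's class labels, and compact the labelled set into human-readable interval rules. The rule-induction step below (one axis-aligned bounding box per class) is a deliberately naive stand-in for the paper's compaction algorithm, and all names and thresholds are illustrative assumptions.

```python
import numpy as np

# Hypothetical parameter space: 40 combinations of two normalised
# parameters ("density", "tempo"), each labelled by a simulated listener.
rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(40, 2))
labels = np.where(params[:, 0] + params[:, 1] > 1.0, "dense", "sparse")

# Naive compaction: for each perceptual class, emit one interval rule
# per parameter ("if density in [a, b] and tempo in [c, d] -> class").
rules = {}
for cls in ("dense", "sparse"):
    pts = params[labels == cls]
    rules[cls] = [(round(lo, 2), round(hi, 2))
                  for lo, hi in zip(pts.min(axis=0), pts.max(axis=0))]

for cls, bounds in rules.items():
    print(cls, bounds)
```

Because the rules are stated over parameter intervals rather than raw samples, they stay interpretable, which mirrors the linguistic-rule output the methodology aims for.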
AffectMachine-Classical: A novel system for generating affective classical music
This work introduces a new music generation system, called AffectMachine-Classical, that is capable of generating affective classical music in real time. AffectMachine was designed to be incorporated into biofeedback systems (such as brain-computer interfaces) to help users become aware of, and ultimately mediate, their own dynamic affective states. That is, this system was developed for music-based MedTech to support real-time emotion self-regulation in users. We provide an overview of the rule-based, probabilistic system architecture, describing the main aspects of the system and how they are novel. We then present the results of a listener study that was conducted to validate the ability of the system to reliably convey target emotions to listeners. The findings indicate that AffectMachine-Classical is very effective in communicating various levels of Arousal to listeners, and is also quite convincing in terms of Valence (R^2 = .90). Future work will embed AffectMachine-Classical into biofeedback systems, to leverage the efficacy of the affective music for emotional well-being in listeners.
Comment: K. Agres and A. Dash share first authorship.
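A rule-based, probabilistic mapping from a target affect to musical parameters, in the spirit of the architecture described above, might look like the following sketch. These rules are our own invented illustrations, not the actual AffectMachine-Classical rule set.

```python
import random

def affect_to_music(valence, arousal, rng=random.Random(0)):
    """Map a target (valence, arousal) pair, each in [0, 1], to a dict of
    hypothetical musical parameters. Rules are illustrative only."""
    tempo = 60 + int(arousal * 100)              # higher arousal -> faster tempo
    mode = "major" if valence >= 0.5 else "minor"  # positive valence -> major mode
    # Probabilistic rule: loud dynamics become more likely as arousal rises
    dynamics = "forte" if rng.random() < arousal else "piano"
    register = "high" if valence > 0.5 and arousal > 0.5 else "mid"
    return {"tempo_bpm": tempo, "mode": mode,
            "dynamics": dynamics, "register": register}

print(affect_to_music(0.8, 0.9))   # happy/excited target
print(affect_to_music(0.2, 0.2))   # sad/calm target
```

Mixing deterministic rules (tempo, mode) with probabilistic ones (dynamics) gives variety across renderings while keeping the conveyed emotion stable, which is the property the listener study above set out to validate.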
Recent Developments at the Columbia University Computer Music Center
Columbia University has had a long involvement with music technology, establishing one of the first, if not the first, research/music centers devoted to electronic music in the United States. Officially recognized in the late 1950s as the Columbia-Princeton Electronic Music Center, the EMC was a hotbed of musico-technological work in the ensuing decades. A few years ago I became Director of the Center; its new advisory board comprises Fred Lerdahl, Tristan Murail, and myself. We managed to secure a sizable boost in funding from the Columbia University Administration and from several external sources, and with this influx of new support we decided to rebuild a number of our studios and to undertake a major overhaul and revamping of the Center's facilities. We also decided to rethink the operation of the Center, seeking to renew the status it enjoyed for decades as an advanced and progressive workplace for musicians and researchers who use new music technologies. At that time we officially changed the name from the Electronic Music Center to the Computer Music Center (CMC) to better reflect the new organizational structure as well as the renewed research/music focus. Dan Thompson asked if I would write a description of some of the changes that have taken place at the CMC for Current Musicology. Rather than merely describing hardware and software projects, I thought it might be more interesting to try to articulate my version of the philosophy driving what we now do at the CMC. The operation of the CMC is indeed a personal odyssey for everyone involved. What I describe is truly my own version of how the Center is, and it may or may not reflect the actual reality of the CMC.
The Power of Sound Design in a Moving Picture: an Empirical Study with emoTouch for iPad
The art of sound design for a moving picture rests largely on the working experience of practitioners. This study tries to establish some guidelines for sound design: in an experiment, 240 participants gave feedback about their emotions while watching two videos, each combined with four different audio tracks: music, sound effects, full sound design (music and sound effects) and no audio (as the comparative "null" version). Each participant viewed an audiovisual combination only once to prevent habituation. The lead author employed a tablet computer with the emoTouch application serving as a mapping tool to provide information about the emotional responses. The participants moved a marker on the tablet's touch screen in a two-dimensional rating scale describing their felt immersion and suspense. A 3-factor ANOVA showed significant increases in the median (and maximum) values of immersion and suspense when the participants listened to music and/or sound effects. These values were always compared to the induced emotions of the participants who watched the videos with no audio at all. The video with full sound design audio tracks increased the median immersion values up to four times and the median suspense values up to 1.4 times. The median suspense values of the video with either music or sound effects dropped by 40 percent compared to the median suspense values of the null version. In contrast, the median immersion values were increased up to 3.6 times. The findings point to the importance of sound effects in an appropriate mix with music to enhance viewers' induced immersion and suspense.
Straddling the intersection
Music technology straddles the intersection between art and science and presents those who choose to work within its sphere with many practical challenges as well as creative possibilities. The paper focuses on four main areas: secondary education, higher education, practice and research, and finally collaboration. The paper emphasises the importance of collaboration in tackling the challenges of interdisciplinarity and in influencing future technological developments.
The Art of Engaging: Implications for Computer Music Systems
The art of engaging with computer music systems is multifaceted. This paper will provide an overview of the issues of interface between musician and computer, cognitive aspects of engagement as involvement, and metaphysical understandings of engagement as proximity. Finally, this paper will examine implications for the design of computer music systems when these issues are taken into account.
Regular expressions as violin bowing patterns
String players spend a significant amount of practice time creating and learning bowings. These may be indicated in the music using up-bow and down-bow symbols, but those traditional notations do not capture the complex bowing patterns that are latent within the music. Regular expressions, a mathematical notation for a simple class of formal languages, can describe precisely the bowing patterns that commonly arise in string music. A software tool based on regular expressions enables performers to search for passages that can be handled with similar bowings, and to edit them consistently. A computer-based music editor incorporating bowing patterns has been implemented, using Lilypond to typeset the music. Our approach has been evaluated by using the editor to study ten movements from six violin sonatas by W. A. Mozart. Our experience shows that the editor is successful at finding passages and inserting bowings; that relatively complex patterns occur a number of times; and that the bowings can be inserted automatically and consistently.
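The core idea above can be illustrated in a few lines: encode each note's bow direction as a character and search the resulting string with a regular expression. The encoding here ('D' = down-bow, 'U' = up-bow, 'S' = slurred with the previous note) and the example pattern are our own assumptions, not the paper's notation.

```python
import re

# A short passage encoded as bow directions, one character per note
bowing_string = "DUDUDSSUDUDUDSSUDDUU"

# Pattern: two alternating down/up pairs followed by a down-bow
# that carries a two-note slur -- a recurring bowing figure
pattern = re.compile(r"(?:DU){2}DSS")

# Find every position where the figure occurs, so the same bowing
# can be applied (or edited) consistently at each occurrence
matches = [m.start() for m in pattern.finditer(bowing_string)]
print(matches)
```

A full tool would match against richer note attributes (duration, string, slur structure) rather than single characters, but the regular-expression machinery for locating similar passages is the same.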