
    Action-based effects on music perception

    The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is well established that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, which postulates a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be transformed into corresponding motor commands (inverse modeling) and, vice versa, to predict the sensory outcomes of planned actions (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamical process in which aspects of action, perception, introspection, and social interaction are of crucial importance.

    Extraction and representation of semantic information in digital media


    SOUND SYNTHESIS WITH CELLULAR AUTOMATA

    This thesis reports on new music technology research which investigates the use of cellular automata (CA) for the digital synthesis of dynamic sounds. The research addresses the problem of the sound design limitations of synthesis techniques based on CA. These limitations fundamentally stem from the unpredictable and autonomous nature of these computational models. Therefore, the aim of this thesis is to develop a sound synthesis technique based on CA capable of allowing a sound design process. A critical analysis of previous research in this area will be presented in order to justify that this problem has not been previously solved. It will also be discussed why this problem is worth solving. In order to achieve this aim, a novel approach is proposed which considers the output of CA as digital signals and uses DSP procedures to analyse them. This approach opens a large variety of possibilities for better understanding the self-organization process of CA, with a view to identifying not only mapping possibilities for making the synthesis of sounds possible, but also control possibilities which enable a sound design process. As a result of this approach, this thesis presents a technique called Histogram Mapping Synthesis (HMS), which is based on the statistical analysis of CA evolutions by histogram measurements. HMS will be studied with four different automata, and a considerable number of control mechanisms will be presented. These will show that HMS enables a reasonable sound design process. With these control mechanisms it is possible to design and produce in a predictable and controllable manner a variety of timbres. Some of these timbres are imitations of sounds produced by acoustic means and others are novel. All the sounds obtained present dynamic features, and many of them, including some of those that are novel, retain important characteristics of sounds produced by acoustic means.
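The abstract's core idea, treating CA evolutions as signals and mapping histogram measurements onto synthesis parameters, can be sketched roughly as follows. This is not the thesis's actual Histogram Mapping Synthesis: the choice of rule 110, the spatial bin count, and the additive-synthesis mapping are all illustrative assumptions.

```python
import math

def evolve(rule, cells, steps):
    """Evolve a one-dimensional binary CA (wrap-around) and return all generations."""
    table = [(rule >> i) & 1 for i in range(8)]  # Wolfram rule lookup table
    gens = [cells[:]]
    for _ in range(steps):
        n = len(cells)
        cells = [table[(cells[(i - 1) % n] << 2) |
                       (cells[i] << 1) |
                       cells[(i + 1) % n]] for i in range(n)]
        gens.append(cells[:])
    return gens

def histogram(gen, bins=4):
    """Count live cells per spatial bin: a crude stand-in for histogram measurements."""
    size = len(gen) // bins
    return [sum(gen[b * size:(b + 1) * size]) for b in range(bins)]

def render(gens, sr=8000, dur=0.01):
    """Map each generation's histogram to the partial amplitudes of a short
    additive-synthesis tone, concatenating one tone per generation."""
    out = []
    for gen in gens:
        amps = histogram(gen)
        total = sum(amps) or 1  # normalize so the mixed partials stay in [-1, 1]
        for n in range(int(sr * dur)):
            t = n / sr
            out.append(sum(a / total * math.sin(2 * math.pi * 220 * (k + 1) * t)
                           for k, a in enumerate(amps)))
    return out
```

Because the histogram evolves with the automaton, the partial amplitudes (and hence the timbre) change over time, which is the dynamic quality the thesis is after.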

    A Vision of sound: A 3D visualization of pipe organ music

    My goal is to create a 3D animation that illustrates the movements and patterns produced in the air when Bach's Chromatic Fugue is played on a pipe organ. By combining the visual element of swirling patterns inspired by pipe organ acoustics simulation with imagery that the music evokes in the mind, I aim to present a surrealistic soundscape that visually depicts the boundless creative energy and freedom of music and mind combined. I hope to create an animation that is aesthetically interesting and that inspires imagination in viewers.

    A Parametric Sound Object Model for Sound Texture Synthesis

    This thesis deals with the analysis and synthesis of sound textures based on parametric sound objects. An overview is provided about the acoustic and perceptual principles of textural acoustic scenes, and technical challenges for analysis and synthesis are considered. Four essential processing steps for sound texture analysis are identified, and existing sound texture systems are reviewed, using the four-step model as a guideline. A theoretical framework for analysis and synthesis is proposed. A parametric sound object synthesis (PSOS) model is introduced, which is able to describe individual recorded sounds through a fixed set of parameters. The model, which applies to harmonic and noisy sounds, is an extension of spectral modeling and uses spline curves to approximate spectral envelopes, as well as the evolution of parameters over time. In contrast to standard spectral modeling techniques, this representation uses the concept of objects instead of concatenated frames, and it provides a direct mapping between sounds of different length. Methods for automatic and manual conversion are shown. An evaluation is presented in which the ability of the model to encode a wide range of different sounds has been examined. Although there are aspects of sounds that the model cannot accurately capture, such as polyphony and certain types of fast modulation, the results indicate that high quality synthesis can be achieved for many different acoustic phenomena, including instruments and animal vocalizations. In contrast to many other forms of sound encoding, the parametric model facilitates various techniques of machine learning and intelligent processing, including sound clustering and principal component analysis. Strengths and weaknesses of the proposed method are reviewed, and possibilities for future development are discussed.
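The idea of describing a spectral envelope with a small set of curve parameters, rather than frame-by-frame FFT data, can be sketched in miniature. The thesis uses spline curves; the piecewise-linear breakpoint function below is a deliberate simplification, and the breakpoint values and harmonic sampling are illustrative assumptions, not the PSOS model itself.

```python
def envelope(points, freq):
    """Evaluate a spectral envelope given as (frequency, amplitude) breakpoints,
    interpolating linearly between them (a stand-in for the spline curves of PSOS)."""
    f0, a0 = points[0]
    if freq <= f0:
        return a0
    for f1, a1 in points[1:]:
        if freq <= f1:
            # Linear interpolation between the two surrounding breakpoints.
            return a0 + (a1 - a0) * (freq - f0) / (f1 - f0)
        f0, a0 = f1, a1
    return a0  # Hold the last value above the final breakpoint.

def partial_amplitudes(points, fundamental, n_partials):
    """Sample the envelope at each harmonic: one compact parameter set yields
    amplitudes for any fundamental, giving a direct mapping between sounds."""
    return [envelope(points, fundamental * k) for k in range(1, n_partials + 1)]
```

Because the envelope is defined by a handful of breakpoints instead of concatenated frames, the same description can be evaluated for sounds of any length or pitch, which mirrors the object-based mapping the abstract describes.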

    Multidimensional Data Sets: Traversing Sound Synthesis, Sound Sculpture, and Scored Composition

    This article documents the conceptual development of several approaches to using multidimensional data sets as a means of propagating sound, manipulating and sculpting sound, and generating compositional scores. This is achieved not only through a methodology that is reminiscent of some of the systematic matrix procedures employed by composer Peter Maxwell Davies, but also through a generative signal path method conventionally termed Wave Terrain Synthesis. Both methodologies follow in essence the same paradigm - the notion of extracting information through a process of traversing multidimensional topography. In this article we look at four documented examples. The first example is concerned with the organic morphology of modulation synthesis. The second example documents a dynamical Wave Terrain Synthesis model that responds and adapts in realtime to live audio input. The third example addresses the use of Wave Terrain Synthesis as a method of controlling another signal processing technique - in this case the independent spatial distribution of 1024 different spectral bands over a multichannel speaker array. The fourth example reflects on the use of matrices in some of the systematic compositional processes of Peter Maxwell Davies, briefly shows how pitch, rhythm, and articulation matrices can be extended into higher-dimensional structures, and proposes how gesture can be used to create realtime generative scores. The underlying intent here is to find an effective and unified methodology for simultaneously controlling the complex parameter sets of synthesis, spatialisation, and scored composition in live realtime laptop performance.
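The traversal paradigm the article describes can be shown in its most basic form: Wave Terrain Synthesis reads a two-dimensional surface along a moving orbit, and the heights along the path become the output signal. The particular terrain function and orbit below are illustrative assumptions, not taken from the article's examples.

```python
import math

def terrain(x, y):
    """A simple two-dimensional surface; any bounded function of x and y works."""
    return math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)

def traverse(sr=8000, dur=0.5, fx=100.0, fy=101.0):
    """Wave Terrain Synthesis sketch: sample the terrain along an elliptical
    orbit whose x and y rates differ slightly, so the path slowly precesses
    and the extracted waveform evolves over time."""
    n = int(sr * dur)
    return [terrain(math.sin(2 * math.pi * fx * t / sr),
                    math.cos(2 * math.pi * fy * t / sr))
            for t in range(n)]
```

Replacing the trajectory with live gesture data, or the terrain with an adapting data set, gives the realtime variants the article discusses; the traversal step itself stays the same.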

    Soft Sound: A Guide to the Tools and Techniques of Software Synthesis

    “Soft Sound” is an examination of software synthesis, the act of using computer software to create audio signals from scratch. It begins with the basics of sound, a prerequisite for understanding any form of synthesis. Then, it delves into a quick history of analog and digital synthesizers before jumping into the heart of the paper: a look at the forms of synthesis that are commonly done in software rather than analog hardware. These forms include wavetable modulation, frequency modulation, and granular synthesis. Different environments for implementing such techniques will be discussed, ranging from user-friendly virtual synthesizers to low-level programming environments. The paper draws material from a variety of sources, ranging from books on computers and music to manufacturer manuals and online magazines. Ultimately, it will be argued that the computer has revolutionized music creation by providing an affordable and convenient way to realize a virtually infinite number of sounds.
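Of the techniques the abstract lists, frequency modulation is the easiest to show in a few lines of software: a modulator oscillator is added to the carrier's phase, producing sidebands whose strength depends on the modulation index. The parameter values here are illustrative assumptions, not drawn from the paper.

```python
import math

def fm(sr=8000, dur=0.25, fc=440.0, fmod=110.0, index=2.0):
    """Basic two-operator FM synthesis: the modulator's output, scaled by the
    modulation index, is added to the carrier's phase at every sample."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fmod * t / sr))
            for t in range(n)]
```

Sweeping `index` over time is the classic way to animate the timbre: higher values spread energy into more sidebands, which is why software FM can imitate bells, brass, and electric pianos from one tiny formula.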

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010