
    Virtual orchestration: a film composer's creative practice

    The advent of digital technologies has led to a major change in the process of film music composition, and consequent developments in music technology have forced film composers to adapt. Technological innovations such as digital audio workstations (DAWs) and virtual musical instruments have made possible the creation of virtual orchestras capable of simulating the sound and behaviour of a traditional acoustic orchestra. This has affected film music production and the creative process of the professional film composer to the extent that creating orchestral simulations, or 'mock-ups', that imitate live orchestras (or smaller ensembles) has become a requirement in the film industry and thus an essential part of the film-scoring process. In the context of contemporary film music production, this thesis investigates how orchestral simulations are composed and created using computer music technology and virtual sample-based instruments. In asking 'how', the focus is on the film composer's activities and thought processes during this creative cycle, along with the nature of the interactive relationship between composer and musical materials. This study aims to show the complexity of the film composer's creative practice and to advance understanding of how the use of computer music technology and orchestral sample libraries is influencing both the compositional process and its outcome. To address these questions, a qualitative multiple-case-study methodology was chosen, examining the practice of seven professional film composers working in feature film as the primary source of data. The exploration involved semi-structured interviews with the composers, observation and analysis of their studio practice, and inspection of their compositional tools.
Taken as a whole, the evidence provided by this study is that the process of creating orchestral simulations is a process of film music composition in which professional film composers create orchestral sounds through the use of computers, digital sequencing, samplers and sample-based virtual acoustic instruments for the realisation of musical works. It is a process of using and manipulating recorded samples of real acoustic instruments to generate an expressive and convincing musical performance through sample-based orchestral simulation. A characteristic of this compositional practice is that it is a continuous process that proceeds in stages over time, with all procedures applicable repeatedly between stages. Creating orchestral simulations for a film score is a multifaceted compositional activity involving a complex set of relationships among different compositional states of mind and activities, in which film composers experience music and interact with musical materials and media in various ways. This creative activity involves a single person and a mixture of compositional tools, skills and abilities, and it requires a thorough blend of art and craft to be demonstrated at all times.

    Towards the Spiritual — The Electroacoustic Music of Jonathan Harvey


    “You Can’t Play a Sad Song on the Banjo:” Acoustic Factors in the Judgment of Instrument Capacity to Convey Sadness

    Forty-four Western-enculturated musicians completed two studies. The first group was asked to judge the relative sadness of forty-four familiar Western instruments. An independent group was asked to assess a number of acoustical properties for those same instruments. Using the estimated acoustical properties as predictor variables in a multiple regression analysis, a significant correlation was found between those properties known to contribute to sad prosody in speech and the judged sadness of the instruments. The best predictor variable was the ability of the instrument to make small pitch movements. Other variables investigated included the darkness of the timbre, the ability to play low pitches, the ability to play quietly, and the capacity of the instrument to “mumble.” Four of the acoustical factors were found to exhibit a considerable amount of shared variance, suggesting that they may originate in a common underlying factor. It is suggested that the shared proximal cause of these acoustical features may be low physical energy.
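The regression design described above can be sketched in a few lines. The data below are invented placeholders (ratings on a hypothetical 1–7 scale, with a fabricated response weighted toward small-pitch-movement ability to mirror the reported best predictor), not the study's measurements.

```python
import numpy as np

# Illustrative sketch of the method, not the study's data or code.
rng = np.random.default_rng(0)
n_instruments = 44

# Predictor ratings (columns): small-pitch-movement ability, timbral
# darkness, low-pitch ability, ability to play quietly, "mumble" capacity.
X = rng.uniform(1, 7, size=(n_instruments, 5))

# Fabricated sadness judgments, weighted most heavily on the first predictor.
sadness = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.3, n_instruments)

# Ordinary least squares with an intercept column in the design matrix.
A = np.column_stack([np.ones(n_instruments), X])
coef, *_ = np.linalg.lstsq(A, sadness, rcond=None)

# Proportion of variance explained by the fitted model.
pred = A @ coef
r2 = 1 - np.sum((sadness - pred) ** 2) / np.sum((sadness - sadness.mean()) ** 2)
print(round(r2, 2))
```

With this synthetic setup, the fitted coefficient on the pitch-movement predictor dominates, echoing the abstract's finding that it was the best single predictor.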

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.

    Blending between bassoon and horn players: an analysis of timbral adjustments during musical performance

    Achieving a blended timbre between two instruments is a common aim of orchestration. It relates to the auditory fusion of simultaneous sounds and can be linked to several acoustic factors (e.g., temporal synchrony, harmonicity, spectral relationships). Previous research has left open whether and how musicians control these factors during performance to achieve blend. For instance, timbral adjustments could be oriented towards the leading performer. In order to study such adjustments, pairs of one bassoon and one horn player participated in a performance experiment involving several musical and acoustical factors. Performances were evaluated through acoustic measures and behavioral ratings, investigating differences across performer roles as leaders or followers, unison or non-unison intervals, and earlier or later segments of performances. In addition, the acoustical influence of the performance room and of communication impairment was investigated. Role assignment affected spectral adjustments: musicians acting as followers adjusted toward a 'darker' timbre, realized by reducing the frequencies of the main formant or the spectral centroid. Notably, these adjustments occurred together with slight reductions in sound level, although this was more apparent for horn than for bassoon players. Furthermore, coordination seemed more critical in unison performances and also improved over the course of a performance. These findings parallel similar dependencies in how performers coordinate their timing and suggest that performer roles also determine the nature of the adjustments necessary to achieve the common aim of a blended timbre.
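One of the acoustic measures named above, the spectral centroid, is the amplitude-weighted mean frequency of the magnitude spectrum; a lower centroid corresponds to a 'darker' timbre. A minimal sketch, using synthetic tones rather than the study's recordings:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Two synthetic tones on the same fundamental (220 Hz) but with different
# harmonic rolloffs: the "darker" tone has much weaker upper partials.
sr = 44100
t = np.arange(sr) / sr
bright = sum((1.0 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 11))
dark = sum((1.0 / k**2) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 11))

# The brighter tone has the higher centroid.
print(spectral_centroid(bright, sr) > spectral_centroid(dark, sr))
```

A follower adjusting toward a darker timbre, in these terms, is shifting spectral energy from the upper partials toward the fundamental.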

    Algorithms and architectures for the multirate additive synthesis of musical tones

    In classical Additive Synthesis (AS), the output signal is the sum of a large number of independently controllable sinusoidal partials. The advantages of AS for music synthesis are well known, as is the high computational cost. This thesis is concerned with the computational optimisation of AS by multirate DSP techniques. In note-based music synthesis, the expected bounds of the frequency trajectory of each partial in a finite-lifecycle tone determine critical time-invariant partial-specific sample rates which are lower than the conventional rate (in excess of 40kHz), resulting in computational savings. Scheduling and interpolation (to suppress quantisation noise) for many sample rates is required, leading to the concept of Multirate Additive Synthesis (MAS), where these overheads are minimised by synthesis filterbanks which quantise the set of available sample rates. Alternative AS optimisations are also appraised. It is shown that a hierarchical interpretation of the QMF filterbank preserves AS generality and permits efficient context-specific adaptation of computation to required note dynamics. Practical QMF implementation and the modifications necessary for MAS are discussed. QMF transition widths can be logically excluded from the MAS paradigm, at a cost; therefore a novel filterbank is evaluated where transition widths are physically excluded. Benchmarking of a hypothetical orchestral synthesis application provides a tentative quantitative analysis of the performance improvement of MAS over AS. The mapping of MAS into VLSI is opened by a review of sine computation techniques. Then the functional specification and high-level design of a conceptual MAS Coprocessor (MASC) is developed, which functions with high autonomy in a loosely-coupled master-slave configuration with a Host CPU which executes filterbanks in software. Standard hardware optimisation techniques are used, such as pipelining, based upon the principle of an application-specific memory hierarchy which maximises MASC throughput.
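The classical AS model summarised above — an output that is the sum of independently controllable sinusoidal partials — can be sketched as follows. The partial list, rates, and durations are illustrative assumptions, and the multirate saving is only suggested by a sample-count ratio rather than an actual synthesis filterbank.

```python
import numpy as np

def additive_synth(partials, duration, sample_rate):
    """Classical AS: sum of sinusoids. partials = [(frequency_hz, amplitude), ...]."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2 * np.pi * freq * t)
    return out

# A harmonic tone on A3 with a 1/k amplitude rolloff (illustrative only).
partials = [(220.0 * k, 1.0 / k) for k in range(1, 9)]
tone = additive_synth(partials, duration=0.5, sample_rate=44100)

# The multirate idea in miniature: a partial whose frequency trajectory is
# bounded below, say, 1 kHz could be generated at a 4 kHz rate and
# interpolated up, rather than computed at the full 44.1 kHz rate.
full_rate_samples = int(0.5 * 44100)
low_rate_samples = int(0.5 * 4000)
print(full_rate_samples // low_rate_samples)  # roughly an 11x sample-count saving
```

In the thesis's scheme the per-partial rates are quantised to a filterbank's available rates, so the realised saving depends on scheduling and interpolation overheads, not just this raw ratio.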

    AEMI: The Actuated Embedded Musical Instrument

    This dissertation combines acoustic and electronic musical creation, and acoustic and digital instruments. Part I is an original composition for orchestra with the new instrument set as soloist. Part II is an examination of the development and influences of creating a new electronic musical instrument. Part I is a composition for AEMI (the Actuated Embedded Musical Instrument) and orchestra, entitled “Meditation on Solids, Liquids, and Gas.” The composition is a dialogue between the orchestra and the instrument, set as an exchange of ideas; some ideas lead to conflict, others to resolution. It also serves to feature some of the musical capabilities of the new instrument. Part II is an examination of AEMI and its influences. Chapter 1 discusses existing instruments whose features influenced the development of AEMI: the Theremin, Manta, JD-1, Buchla controller, EVI and EWI, and Chameleon Guitar. While the AEMI instrument does not have the same performance mechanics as the Theremin, EVI, or EWI, understanding the physicality issues of an instrument like the Theremin provided insights into creating a versatile instrument that can be easily learned yet has virtuosic character. Ultimately, embedding expressivity, such as subtlety and nuance, into the instrument proved one of the most difficult aspects of its creation and demanded the largest amount of work. Chapter 2 describes the aesthetics, technical aspects, difficulties, and musical abilities of the instrument. Attempts to combine acoustic and electronic music are not novel, but the incorporation of acoustically driven resonance in embedded electronic instruments is new. The electroacoustic nature of this instrument differs from that of most electronic instruments: the controller and user interface are electronically driven, and its speakers/acoustic drivers are embedded within the instrument. This discussion may provide insights to musicians, composers, and instrument makers seeking new avenues of musical expression.