
    Supporting creative composition: the FrameWorks approach

    We present a new system for music composition using structured sequences. FrameWorks has been developed on the basis of Task Analysis research into composition processes, together with other user-centred design techniques. While the program only uses MIDI information, it can be seen as a ‘proof of concept’ for ideas generally applicable to the specification and manipulation of other music control data, be it raw audio, music notation or synthesis parameters. While this first implementation illustrates the basic premise, it already provides composers with an interesting and simple-to-use environment for exploring and testing musical ideas. Future research will develop the concept, in particular to enhance the scalability of the system.

    FrameWorks 3D: composition in the third dimension

    Music composition on computer is a challenging task, involving a range of data types to be managed within a single software tool. A composition typically comprises a complex arrangement of material, with many internal relationships between data in different locations - repetition, inversion, retrograde, reversal and more sophisticated transformations. The creation of such complex artefacts is labour-intensive, and current systems typically place a significant cognitive burden on the composer in terms of maintaining a work as a coherent whole. FrameWorks 3D is an attempt to improve support for composition tasks within a Digital Audio Workstation (DAW) style environment via a novel three-dimensional (3D) user interface. In addition to the standard paradigm of tracks, regions and the tape-recording analogy, FrameWorks displays hierarchical and transformational information in a single, fully navigable workspace. The implementation combines Java with Max/MSP to create a cross-platform, user-extensible package, and will be used to assess the viability of such a tool and to develop the ideas further.
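    The pitch-sequence transformations the abstract lists can be sketched on MIDI note numbers. This is an illustrative sketch, not FrameWorks code; the function names are invented for the example.

```python
# Illustrative sketch of the transformations named in the abstract,
# applied to lists of MIDI note numbers (not actual FrameWorks code).

def retrograde(notes):
    """Reverse the order of a note sequence."""
    return list(reversed(notes))

def inversion(notes, axis=None):
    """Mirror each pitch about an axis (default: the first note)."""
    if axis is None:
        axis = notes[0]
    return [2 * axis - n for n in notes]

def transpose(notes, interval):
    """Shift every pitch by a fixed number of semitones (repetition
    at a new pitch level)."""
    return [n + interval for n in notes]

motif = [60, 62, 64, 67]          # C D E G
print(retrograde(motif))          # [67, 64, 62, 60]
print(inversion(motif))           # [60, 58, 56, 53]
print(transpose(motif, 5))        # [65, 67, 69, 72]
```

    Maintaining such relationships automatically, rather than by hand-editing each copy, is exactly the kind of cognitive burden the abstract describes.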

    An Architecture For Creating Hosting Plug-Ins For Use In Digital Audio Workstations

    Although modern software-based DAWs (Digital Audio Workstations) offer the ability to interconnect with plug-in effects, they can be restrictive because their architecture is largely based on hardware mixing desks. This is especially true when complex multi-effect sound design is required. This paper aims to demonstrate how a plug-in that can host other effects plug-ins can improve the sound-design possibilities in a DAW. This hosting plug-in allows other effects to be “inserted” at specific points in its internal signal flow. Details are given of a “proof of concept” plug-in that was created to demonstrate that it is possible to create plug-ins that can host other plug-ins, using Apple’s AU (Audio Unit) format. The proof of concept is a delay effect that allows other effects plug-ins to be inserted in the “delay path”, the “feedback path” or both. This Audio Unit has been extensively tested in different DAWs and has been found to work successfully in a variety of situations. Finally, details are given of how the plug-in hosting delay can be improved.
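    The signal flow described above can be sketched in a few lines. This is a hypothetical illustration of the hosting-delay idea, not the AU plug-in itself; the class and parameter names are invented for the example.

```python
# Hypothetical sketch of a hosting delay: a feedback delay line where
# other "effects" (here, plain callables) can be inserted in the delay
# path, the feedback path, or both.

class HostingDelay:
    def __init__(self, delay_samples, feedback=0.0,
                 delay_fx=None, feedback_fx=None):
        self.buf = [0.0] * delay_samples       # circular delay buffer
        self.pos = 0
        self.feedback = feedback
        self.delay_fx = delay_fx or (lambda x: x)        # hosted effect A
        self.feedback_fx = feedback_fx or (lambda x: x)  # hosted effect B

    def process(self, sample):
        delayed = self.delay_fx(self.buf[self.pos])      # effect in delay path
        fb = self.feedback_fx(delayed) * self.feedback   # effect in feedback path
        self.buf[self.pos] = sample + fb
        self.pos = (self.pos + 1) % len(self.buf)
        return sample + delayed

# Insert a simple "effect" (half gain) into the delay path only:
d = HostingDelay(delay_samples=2, feedback=0.0, delay_fx=lambda x: 0.5 * x)
out = [d.process(s) for s in [1.0, 0.0, 0.0, 0.0]]
print(out)  # [1.0, 0.0, 0.5, 0.0] - the echo passes through the hosted effect
```

    In the real plug-in the hosted effects would be full AU instances rather than callables, but the routing decision (pre-delay vs. in-feedback) is the same.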

    Physically inspired interactive music machines: making contemporary composition accessible?

    Much of what we might call "high-art music" occupies the difficult end of listening for contemporary audiences. Concepts such as pitch, meter and even musical instruments often have little to do with such music, where all sound is typically considered as possessing musical potential. As a result, such music can be challenging for educationalists, as students have few familiar pointers for discovering and understanding the gestures, relationships and structures in these works. This paper describes ongoing projects at the University of Hertfordshire that adopt an approach of mapping interactions within visual spaces onto musical sound. These provide a causal explanation for the patterns and sequences heard, whilst incorporating web interoperability, thus enabling potential distance-learning applications. While so far these have mainly driven pitch-based events using MIDI or audio files, it is hoped to extend the ideas, using appropriate technology, into fully developed composition tools, aiding the teaching of both appreciation/analysis and composition of contemporary music.
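    The core mapping idea, visual position driving pitched MIDI events, can be sketched very simply. The function below is an invented illustration, not code from the Hertfordshire projects.

```python
# Hypothetical sketch of mapping a position in a visual space onto a
# MIDI note number (the projects themselves drive events from richer
# physically inspired interactions).

def y_to_midi_pitch(y, height, low=48, high=84):
    """Map a vertical position (0 = top of the pane) to a MIDI note,
    so that objects higher on screen sound higher in pitch."""
    frac = 1.0 - y / height
    return round(low + frac * (high - low))

print(y_to_midi_pitch(0, 400))    # top of a 400-px pane -> 84
print(y_to_midi_pitch(400, 400))  # bottom -> 48
print(y_to_midi_pitch(200, 400))  # middle -> 66
```

    The point of such mappings is the causal link: a listener can see why a pitch occurred, because an object visibly reached that position.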

    Analyzing auditory representations for sound classification with self-organizing neural networks

    Three different auditory representations—Lyon’s cochlear model, Patterson’s gammatone filter bank combined with Meddis’ inner hair cell model, and mel-frequency cepstral coefficients (MFCCs)—are analyzed in connection with self-organizing maps to evaluate their suitability for a perceptually justified classification of sounds. The self-organizing maps are trained with a uniform set of test sounds preprocessed by the auditory representations. The structure of the resulting feature maps and the trajectories of the individual sounds are visualized and compared to one another. While the MFCC representation proved to be very efficient, the gammatone model produced the most convincing results.
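    For readers unfamiliar with self-organizing maps, the training loop can be sketched in a few lines. This is a minimal toy, with invented data standing in for the auditory-model feature vectors used in the paper.

```python
# Minimal self-organizing map sketch (illustrative only; the paper's maps
# were trained on auditory-model features, not this toy 2-D data).
import math, random

def train_som(data, n_units=4, epochs=50, lr0=0.5, radius0=2.0, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            # best-matching unit = unit with the closest weight vector
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2
                                        for u, v in zip(units[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

# Two well-separated "feature" clusters should claim different map units.
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
units = train_som(data)
```

    Visualizing which unit each sound activates over time gives the trajectory plots the abstract refers to.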

    Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination

    Copyright Institute of Electronic Music and Acoustics.
    Many applications of Music Information Retrieval can benefit from effective isolation of the music sources. Earlier work by the authors led to the development of a system that is based on Azimuth Discrimination and Resynthesis (ADRess) and can extract the singing voice from reverberant stereophonic mixtures. We propose an extension to our previous method that is not based on ADRess and exploits both channels of the stereo mix more effectively. For the evaluation of the system we use a dataset that contains songs convolved during mastering as well as during the mixing process (i.e. “real-world” conditions). The metrics for objective evaluation are based on bss_eval.
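    The stereo intuition behind such systems can be shown with a toy example: a centre-panned source appears identically in both channels, so subtracting one channel from the other cancels it. This is only the simplest channel-subtraction idea, not the paper's independent-component-subtraction method, and the signals below are invented.

```python
# Toy illustration of exploiting both stereo channels: a centre-panned
# vocal is identical in L and R, so L - R cancels it exactly, leaving
# the side-panned accompaniment. (The actual system uses non-vocal
# independent component subtraction and amplitude discrimination.)

vocal  = [0.5, -0.3, 0.2, 0.1]    # centre-panned: equal in both channels
guitar = [0.4,  0.4, -0.2, 0.0]   # panned hard left

left  = [v + g for v, g in zip(vocal, guitar)]
right = list(vocal)               # guitar absent from the right channel

side = [l - r for l, r in zip(left, right)]  # vocal cancels out
print(side)                                  # recovers the guitar part
```

    Real mixes are reverberant and convolved, which is why the simple subtraction above fails on them and more robust statistical separation is needed.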

    Integrating musicology's heterogeneous data sources for better exploration

    Musicologists have to consult an extraordinarily heterogeneous body of primary and secondary sources during all stages of their research. Many of these sources are now available online, but the historical dispersal of material across libraries and archives has now been replaced by segregation of data and metadata into a plethora of online repositories. This segregation hinders the intelligent manipulation of metadata, and means that extracting large tranches of basic factual information or running multi-part search queries is still enormously and needlessly time consuming. To counter this barrier to research, the “musicSpace” project is experimenting with integrating access to many of musicology’s leading data sources via a modern faceted browsing interface that utilises Semantic Web and Web 2.0 technologies such as RDF and AJAX. This will make previously intractable search queries tractable, enable musicologists to use their time more efficiently, and aid the discovery of potentially significant information that users did not think to look for. This paper outlines our work to date.
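    Faceted browsing amounts to conjunctive filtering over shared metadata fields. The sketch below is a toy in-memory version with invented records; musicSpace itself works over aggregated RDF data, not Python dicts.

```python
# Toy faceted-filtering sketch (illustrative; not musicSpace code).
records = [
    {"composer": "Purcell", "genre": "opera",  "year": 1689},
    {"composer": "Purcell", "genre": "anthem", "year": 1694},
    {"composer": "Handel",  "genre": "opera",  "year": 1711},
]

def facet_filter(records, **facets):
    """Keep only records matching every selected facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

print(facet_filter(records, composer="Purcell", genre="opera"))
```

    The value of integration is that such a filter can run across all sources at once, which is impossible while each repository exposes only its own search box.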

    A framework for the development and evaluation of graphical interpolation for synthesizer parameter mappings

    This paper presents a framework that supports the development and evaluation of graphical interpolated parameter mapping for the purpose of sound design. These systems present the user with a graphical pane, usually two-dimensional, where synthesizer presets can be located. Moving an interpolation point cursor within the pane will then create new sounds by calculating new parameter values, based on the cursor position and the interpolation model used. The exploratory nature of these systems lends itself to sound design applications, which also have a highly exploratory character. However, populating the interpolation space with “known” preset sounds allows the parameter space to be constrained, reducing the design complexity otherwise associated with synthesizer-based sound design. An analysis of previous graphical interpolators is presented, and from this a framework is formalized and tested to show its suitability for the evaluation of such systems. The framework has then been used to compare the functionality of a number of systems that have been previously implemented. This has led to a better understanding of the different sonic outputs that each can produce and highlighted areas for further investigation.
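    One common interpolation model for such panes is inverse-distance weighting of the preset parameter vectors by the cursor position. The sketch below assumes that model and invented preset values; the systems surveyed in the paper use a variety of models.

```python
# Hedged sketch of one possible interpolation model: inverse-distance
# weighting of preset parameter vectors by cursor position in a 2-D pane.
import math

def interpolate(cursor, presets):
    """presets: list of ((x, y), [param, ...]) pairs."""
    weights = []
    for (px, py), params in presets:
        d = math.hypot(cursor[0] - px, cursor[1] - py)
        if d == 0:                     # cursor exactly on a preset
            return list(params)
        weights.append((1.0 / d, params))
    total = sum(w for w, _ in weights)
    dim = len(presets[0][1])
    return [sum(w * p[i] for w, p in weights) / total for i in range(dim)]

presets = [((0.0, 0.0), [100.0, 0.2]),   # e.g. [cutoff, resonance]
           ((1.0, 0.0), [500.0, 0.8])]
print(interpolate((0.5, 0.0), presets))  # midpoint gives ~[300.0, 0.5]
```

    Swapping in a different weighting function changes the "feel" of the space, which is one of the functional differences the framework is designed to compare.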

    Using pivots to explore heterogeneous collections: A case study in musicology

    In order to provide a better e-research environment for musicologists, the musicSpace project has partnered with musicology’s leading data publishers, aggregated and enriched their data, and developed a richly featured exploratory search interface to access the combined dataset. There have been several significant challenges to developing this service, and intensive collaboration between musicologists (the domain experts) and computer scientists (who developed the enabling technologies) was required. One challenge was the actual aggregation of the data itself, as this was supplied adhering to a wide variety of different schemas and vocabularies. Although the domain experts expended much time and effort in analysing commonalities in the data, as data sources of increasing complexity were added, earlier decisions regarding the design of the aggregated schema, particularly decisions made with reference to simpler data sources, were often revisited to take account of unanticipated metadata types. Additionally, in many domains a single source may be considered to be definitive for certain types of information. In musicology, this is essentially the case with the “works lists” of composers’ musical compositions given in Grove Music Online (http://www.oxfordmusiconline.com/public/book/omo_gmo), and so for musicSpace, we have mapped all sources to the works lists from Grove for the purposes of exploration, specifically to exploit the accuracy of its metadata in respect to dates of publication, catalogue numbers, and so on. Therefore, rather than mapping all fields from Grove to a central model, it would be far quicker (in terms of development time) to create a system to “pull in” data from other sources that are mapped directly to the Grove works lists.
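    The pivot idea reduces to joining every source on a shared key from the definitive list rather than through a central schema. The sketch below uses invented sample data, with catalogue numbers standing in for Grove works-list entries.

```python
# Illustrative sketch of the "pivot" approach: each source's records are
# keyed by an identifier from a definitive works list, so merging needs
# no central schema. Data values here are illustrative samples.

works = {"BWV 846": "Prelude and Fugue in C major",
         "BWV 847": "Prelude and Fugue in C minor"}

source_a = {"BWV 846": {"published": 1722}}
source_b = {"BWV 846": {"manuscript": "P 415"},
            "BWV 847": {"manuscript": "P 415"}}

def pivot(key):
    """Gather everything each source says about one work."""
    merged = {"title": works[key]}
    for src in (source_a, source_b):
        merged.update(src.get(key, {}))
    return merged

print(pivot("BWV 846"))
```

    Adding a new source then only requires mapping its records to the works-list keys, not revisiting the design of a shared schema.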

    A Journey in (Interpolated) Sound: Impact of Different Visualizations in Graphical Interpolators

    Graphical interpolation systems provide a simple mechanism for the control of sound synthesis systems by providing a level of abstraction above the parameters of the synthesis engine, allowing users to explore different sounds without awareness of the synthesis details. While a number of graphical interpolator systems have been developed over many years, with a variety of user-interface designs, few have been subject to user evaluations. We present the testing and evaluation of alternative visualizations for a graphical interpolator in order to establish whether the visual feedback provided through the interface aids the navigation and identification of sounds with the system. The testing took the form of comparing the users’ mouse traces, showing the journey they made through the interpolated sound space when different visual interfaces were used. Sixteen participants took part and a summary of the results is presented, showing that the visuals provide users with additional cues that lead to better interaction with the interpolator.
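    One simple way to quantify the "journeys" described above is the total path length of a mouse trace through the pane. This is an assumed metric for illustration, not necessarily the analysis used in the study.

```python
# Hypothetical mouse-trace metric: total path length through the pane.
# A shorter path to the target sound suggests more direct navigation.
import math

def path_length(trace):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trace, trace[1:]))

direct    = [(0, 0), (3, 4)]                           # straight to target
wandering = [(0, 0), (3, 0), (3, 4), (0, 4), (3, 4)]   # searching behaviour

print(path_length(direct))     # 5.0
print(path_length(wandering))  # 13.0
```

    Comparing such metrics across interface conditions is one way visual feedback effects could show up in the trace data.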