
    Signal Processing Methods for Music Synchronization, Audio Matching, and Source Separation

    Get PDF
    The field of music information retrieval (MIR) aims at developing techniques and tools for organizing, understanding, and searching multimodal information in large music collections in a robust, efficient, and intelligent manner. In this context, this thesis presents novel, content-based methods for music synchronization, audio matching, and source separation.

    In general, music synchronization denotes a procedure which, for a given position in one representation of a piece of music, determines the corresponding position within another representation. Here, the thesis presents three complementary synchronization approaches, which improve upon previous methods in terms of robustness, reliability, and accuracy. The first approach employs a late-fusion strategy based on multiple, conceptually different alignment techniques to identify those music passages that allow for reliable alignment results. The second approach is based on the idea of employing musical structure analysis methods in the context of synchronization to derive reliable synchronization results even in the presence of structural differences between the versions to be aligned. Finally, the third approach employs several complementary strategies for increasing the accuracy and time resolution of synchronization results.

    Given a short query audio clip, the goal of audio matching is to automatically retrieve all musically similar excerpts in different versions and arrangements of the same underlying piece of music. In this context, chroma-based audio features are a well-established tool, as they possess a high degree of invariance to variations in timbre. This thesis describes a novel procedure for making chroma features even more robust to changes in timbre while keeping their discriminative power. Here, the idea is to identify and discard timbre-related information using techniques inspired by the well-known MFCC features, which are usually employed in speech processing.

    Given a monaural music recording, the goal of source separation is to extract musically meaningful sound sources corresponding, for example, to a melody, an instrument, or a drum track from the recording. To facilitate this complex task, one can exploit additional information provided by a musical score. Based on this idea, this thesis presents two novel, conceptually different approaches to source separation. Using score information provided by a given MIDI file, the first approach employs a parametric model to describe a given audio recording of a piece of music. The resulting model is then used to extract sound sources as specified by the score. As a computationally less demanding and easier-to-implement alternative, the second approach employs the additional score information to guide a decomposition based on non-negative matrix factorization (NMF).
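
    The score-informed NMF variant mentioned last lends itself to a compact illustration. The sketch below is a minimal example of the general idea, not the thesis implementation: NMF activations are constrained by a binary mask derived from an aligned MIDI file, so that each component can only be active where the score permits the corresponding pitch. The function name, the mask representation, and the Euclidean-cost multiplicative updates are assumptions made for this sketch.

```python
# Minimal sketch of score-informed NMF (illustrative, not the thesis code):
# activations are only allowed where a MIDI-derived mask says a pitch may
# sound, so each NMF component stays tied to one score voice.
import numpy as np

def score_informed_nmf(V, score_mask, n_iter=200, eps=1e-10):
    """V: non-negative magnitude spectrogram (freq x frames).
    score_mask: binary (pitches x frames) matrix from an aligned MIDI file,
    1 where the score allows the pitch to be active."""
    n_freq, _ = V.shape
    n_pitches = score_mask.shape[0]
    rng = np.random.default_rng(0)
    W = rng.random((n_freq, n_pitches)) + eps                    # spectral templates
    H = (rng.random(score_mask.shape) + eps) * score_mask        # constrained activations
    for _ in range(n_iter):
        # multiplicative updates for the Euclidean NMF cost
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        H *= score_mask                                          # re-impose score constraint
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```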

    Case Study "Beatles Songs" — What can be Learned from Unreliable Music Alignments?

    Get PDF
    As a result of massive digitization efforts and the world wide web, there is an exploding amount of available digital data describing and representing music at various semantic levels and in diverse formats. For example, in the case of the Beatles songs, there are numerous recordings, including an increasing number of cover songs and arrangements, as well as MIDI data and other symbolic music representations. The general goal of music synchronization is to align the multiple information sources related to a given piece of music. This becomes a difficult problem when the various representations reveal significant differences in structure and polyphony, while exhibiting various types of artifacts. In this paper, we address the issue of how music synchronization techniques can be used to automatically reveal critical passages with significant differences between the two versions to be aligned. Using the corpus of the Beatles songs as a test bed, we analyze the kinds of differences occurring in the audio and MIDI versions available for the songs.
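
    The following sketch illustrates the underlying mechanism in its simplest form, assuming chroma features and plain dynamic time warping; the actual features, cost measure, and threshold used in the paper may differ. Aligned frame pairs whose local cost stays high are candidates for the critical passages mentioned above.

```python
# Illustrative sketch: align two chroma sequences with plain DTW and flag
# aligned frames whose local matching cost is high as potentially critical.
import numpy as np

def dtw_path(X, Y):
    """X, Y: (12 x N) and (12 x M) chroma matrices; returns warping path and cost matrix."""
    # cosine distance between every pair of frames
    C = 1.0 - X.T @ Y / (np.linalg.norm(X, axis=0)[:, None]
                         * np.linalg.norm(Y, axis=0)[None, :] + 1e-9)
    N, M = C.shape
    D = np.full((N + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            D[i, j] = C[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path
    path, i, j = [], N, M
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], C

def critical_frames(path, C, threshold=0.6):
    """Return aligned frame pairs whose local cost exceeds an (assumed) threshold."""
    return [(i, j) for i, j in path if C[i, j] > threshold]
```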

    09051 Abstracts Collection -- Knowledge representation for intelligent music processing

    Get PDF
    From the twenty-fifth to the thirtieth of January, 2009, the Dagstuhl Seminar 09051 on "Knowledge representation for intelligent music processing" was held in Schloss Dagstuhl -- Leibniz Centre for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations and demos given during the seminar, as well as plenary presentations, reports of workshop discussions, results, and ideas, are put together in this paper. The first section describes the seminar topics and goals in general, followed by plenary 'stimulus' papers, then reports and abstracts arranged by workshop, and finally some concluding materials providing views of both the seminar itself and the longer-term goals of the discipline. Links to extended abstracts, full papers, and supporting materials are provided where available. The organisers thank David Lewis for editing these proceedings.

    Linking Sheet Music and Audio - Challenges and New Approaches

    Get PDF
    Score and audio files are the two most important ways to represent, convey, record, store, and experience music. While a score describes a piece of music on an abstract level using symbols such as notes, keys, and measures, audio files allow for reproducing a specific acoustic realization of the piece. Each of these representations reflects different facets of music, yielding insights into aspects ranging from structural elements (e.g., motives, themes, musical form) to specific performance aspects (e.g., artistic shaping, sound). Therefore, simultaneous access to score and audio representations is of great importance. In this paper, we address the problem of automatically generating musically relevant linking structures between the various data sources that are available for a given piece of music. In particular, we discuss the task of sheet music-audio synchronization, with the aim of linking regions in images of scanned scores to musically corresponding sections in an audio recording of the same piece. Such linking structures form the basis for novel interfaces that allow users to access and explore multimodal sources of music within a single framework. As our main contributions, we give an overview of the state of the art for this kind of synchronization task, present some novel approaches, and indicate future research directions. In particular, we address problems that arise in the presence of structural differences and discuss challenges when applying optical music recognition to complex orchestral scores. Finally, potential applications of the synchronization results are presented.
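
    A linking structure of the kind discussed here can be thought of as a lookup table from score regions to audio time intervals. The following sketch is purely illustrative (the data shapes and function names are assumptions): given measure bounding boxes obtained from optical music recognition and measure-level timestamps obtained from synchronization, it maps a click on the scanned page to the corresponding audio excerpt, and an audio playback position back to the measure to be highlighted.

```python
# Hedged sketch of a measure-level linking structure (illustrative only).

def region_for_click(x, y, measure_boxes, measure_times):
    """measure_boxes: list of (x0, y0, x1, y1) pixel boxes per measure on a page.
    measure_times: list of (start_sec, end_sec) per measure in the recording.
    Returns the audio interval to play for a click at pixel (x, y)."""
    for idx, (x0, y0, x1, y1) in enumerate(measure_boxes):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return measure_times[idx]
    return None

def measure_for_time(t, measure_times):
    """Inverse lookup: index of the measure sounding at audio time t (seconds)."""
    for idx, (start, end) in enumerate(measure_times):
        if start <= t < end:
            return idx
    return None
```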

    Initial synchronisation of wideband and UWB direct sequence systems: single- and multiple-antenna aided solutions

    No full text
    This survey guides the reader through the open literature on the principle of initial synchronisation in single-antenna-assisted single- and multi-carrier Code Division Multiple Access (CDMA) as well as Direct Sequence-Ultra WideBand (DS-UWB) systems, with special emphasis on the DownLink (DL). There is a paucity of up-to-date surveys and review articles on initial synchronisation solutions for MIMO-aided and cooperative systems, even though there is a plethora of papers on both MIMOs and on cooperative systems which assume perfect synchronisation. Hence this paper aims to fill the related gap in the literature.
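
    For readers unfamiliar with the topic, the sketch below shows the textbook serial-search form of initial code acquisition, which is the kind of procedure the surveyed DS-CDMA solutions refine; it is an illustrative assumption, not an algorithm taken from the survey. The receiver correlates the received chips with locally generated, cyclically shifted copies of the spreading code and picks the offset with the largest correlation magnitude.

```python
# Illustrative serial-search code acquisition for a DS-CDMA downlink.
import numpy as np

def acquire_code_phase(rx, code, threshold=None):
    """rx: received baseband samples, one sample per chip (complex), len(rx) >= len(code).
    code: known spreading sequence of +/-1 chips.
    Returns the estimated code phase in chips, or None if no offset passes the threshold."""
    L = len(code)
    metrics = np.empty(L)
    for tau in range(L):                                  # serial search over chip offsets
        shifted = np.roll(code, tau)                      # local code delayed by tau chips
        metrics[tau] = np.abs(np.vdot(shifted, rx[:L]))   # coherent correlation magnitude
    best = int(np.argmax(metrics))
    if threshold is not None and metrics[best] < threshold:
        return None                                       # no reliable acquisition yet
    return best
```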

    Systems for User and Application Mobility in Wired and Wireless Networks

    Get PDF
    The words mobility and network are found together in many contexts. The issue alone of modeling geographical user mobility in wireless networks has countless applications. Depending on one's background, the concept is investigated with very different tools and aims. Moreover, the last decade also saw a growing interest in code mobility, i.e. the possibility for software applications (or parts thereof) to migrate and keep working in different devices and environments. A notable real-life and successful application is distributed computing, which under certain hypotheses can obviate the need for expensive supercomputers. The general rationale is splitting a very demanding computing task into a large number of independent sub-problems, each addressable by limited-power machines that are weakly connected (typically through the Internet, the quintessence of a wired network). Following these lines of thought, we organized this thesis in two distinct and independent parts.

    Part I deals with audio fingerprinting, with special emphasis on the application of broadcast monitoring and on implementation aspects. Although the problem is tackled from many sides, one of the most prominent difficulties is the high computing power required for the task. We thus devised and operated a distributed-computing solution, which is described in detail. Tests were conducted on the computing cluster available at the Department of Engineering of the University of Ferrara.

    Part II focuses instead on wireless networks. Even if the approach is quite general, the stress is on WiFi networks. More specifically, we tried to evaluate how mobile users' experience can be improved. Two tools are considered. In the first place, we wrote a packet-level simulator and used it to estimate the impact of pricing strategies in allocating the bandwidth resource, finding out the need for such solutions. Secondly, we developed a high-level simulator whose results strongly suggest deepening the topic of user cooperation for the selection of the “best” point of access when many are available. We also propose one such policy.
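
    The distributed-computing rationale described for Part I can be sketched in a few lines. The example below is an illustrative assumption rather than the system built for the thesis: a reference database of fingerprint hash sets is split into independent chunks, each chunk is matched against the query on its own worker process, and the partial scores are merged.

```python
# Hedged sketch of splitting fingerprint matching into independent sub-problems.
from multiprocessing import Pool

def match_chunk(args):
    """Count fingerprint-hash collisions between the query and one database chunk."""
    query_hashes, chunk = args
    return {track_id: len(query_hashes & ref_hashes)
            for track_id, ref_hashes in chunk.items()}

def distributed_match(query_hashes, reference_db, n_workers=8):
    """reference_db: dict mapping track_id -> set of fingerprint hashes.
    Returns the track_id with the most matching hashes, or None."""
    items = list(reference_db.items())
    step = max(1, len(items) // n_workers)
    chunks = [dict(items[i:i + step]) for i in range(0, len(items), step)]
    with Pool(n_workers) as pool:
        partial = pool.map(match_chunk, [(query_hashes, c) for c in chunks])
    scores = {}
    for d in partial:
        scores.update(d)
    return max(scores, key=scores.get) if scores else None
```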

    Interactive real-time musical systems

    Get PDF
    This thesis focuses on the development of automatic accompaniment systems. We investigate previous systems and look at a range of approaches that have been attempted for the problem of beat tracking. Most beat trackers are intended for the purposes of music information retrieval, where a 'black box' approach is tested on a wide variety of music genres. We highlight some of the difficulties facing offline beat trackers and design a new approach for the problem of real-time drum tracking, developing a system, B-Keeper, which makes reasonable assumptions on the nature of the signal and is provided with useful prior knowledge. Having developed the system with offline studio recordings, we then test the system with human players. Existing offline evaluation methods seem less suitable for a performance system, since we also wish to evaluate the interaction between musician and machine. Although statistical data may reveal quantifiable measurements of the system's predictions and behaviour, we also want to test how well it functions within the context of a live performance. To do so, we devise an evaluation strategy to contrast a machine-controlled accompaniment with one controlled by a human. We also present recent work on real-time multiple pitch tracking, which is then extended to provide automatic accompaniment for harmonic instruments such as guitar. By aligning salient notes in the output from a dual pitch tracking process, we make changes to the tempo of the accompaniment in order to align it with a live stream. By demonstrating the system's ability to align offline tracks, we show that, under restricted initial conditions, the algorithm works well as an alignment tool.
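
    The tempo-following behaviour described above can be illustrated with a deliberately simplified update rule; it is an assumption for illustration, not the B-Keeper algorithm. The accompaniment compares the detected onset time of a live beat with the beat time it expected and nudges its tempo accordingly.

```python
# Hedged sketch of a proportional tempo-adjustment rule for an accompaniment.
def update_tempo(current_bpm, expected_beat_time, detected_onset_time,
                 gain=0.3, min_bpm=40.0, max_bpm=240.0):
    """Return an adjusted tempo in BPM; gain and bounds are illustrative values."""
    error = detected_onset_time - expected_beat_time   # > 0: live player is behind
    beat_period = 60.0 / current_bpm
    # lengthen the beat period if the player is behind, shorten it if ahead
    new_period = beat_period + gain * error
    new_bpm = 60.0 / max(1e-3, new_period)
    return min(max_bpm, max(min_bpm, new_bpm))
```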