6 research outputs found

    Automatic recognition of Persian musical modes in audio musical signals

    This research proposes new approaches for the computational identification of Persian musical modes. It involves constructing a database of audio musical files and developing computer algorithms to perform musical analysis of the samples. The essential features (spectral average, chroma, and pitch histograms), together with the use of symbolic data, are discussed and compared. A tonic detection algorithm is developed to align the feature vectors and to make the mode recognition methods independent of changes in tonality. Subsequently, a geometric distance measure such as the Manhattan distance (which is preferred), cross-correlation, or a machine learning method (Gaussian mixture models) is used to gauge the similarity between a signal and a set of templates constructed in the training phase, in which data-driven patterns are built for each dastgāh (Persian mode). The effects of the following parameters are considered and assessed: the amount of training data; the parts of the frequency range used for training; downsampling; tone resolution (12-TET, 24-TET, 48-TET and 53-TET); the use of overlapping versus non-overlapping frames; and silence and high-energy suppression in pre-processing. The santur, a hammered string instrument used extensively in the database samples, is described and its physical properties are characterised; its characteristic pitch and harmonic deviations are measured; and the inharmonicity factor of the instrument is calculated for the first time. The results are applicable to Persian music and to closely related musical traditions of the Mediterranean and the Near East. This approach enables content-based analyses of, and content-based searches in, musical archives.
Potential applications of this research include: music information retrieval, audio thumbnailing (snippet generation), music archiving and access to archival content, audio compression and coding, association of images with audio content, music transcription, music synthesis, music editors, music instruction, automatic music accompaniment, and the setting of new standards and symbols for musical notation.
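The tonic alignment and template-matching stage described above can be pictured with a short sketch. This is illustrative only, not the thesis code: a 12-bin chroma histogram and hypothetical mode names are assumed (the thesis also evaluates 24-, 48- and 53-TET resolutions), and the exhaustive rotation below stands in for the dedicated tonic detection algorithm. It rotates the chroma vector through every candidate tonic and keeps the mode template with the smallest Manhattan distance:

```python
import numpy as np

def manhattan(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return np.abs(a - b).sum()

def best_mode(chroma, templates):
    """Match a chroma histogram against mode templates, tonic-invariantly.

    Rotating the chroma vector over every possible tonic position makes the
    comparison independent of tonality. Returns the (mode_name, rotation)
    pair with the smallest Manhattan distance to any template.
    """
    best = (None, None, float("inf"))
    for name, template in templates.items():
        for shift in range(len(chroma)):
            d = manhattan(np.roll(chroma, shift), template)
            if d < best[2]:
                best = (name, shift, d)
    return best[0], best[1]
```

In the thesis's setup, a separate tonic detection step would fix the rotation in advance, so only one distance per template would need to be computed.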

    Computational Tonality Estimation: Signal Processing and Hidden Markov Models

    This thesis investigates computational musical tonality estimation from an audio signal. We present a hidden Markov model (HMM) in which relationships between chords and keys are expressed as probabilities of emitting observable chords from a hidden key sequence. The model is tested first using symbolic chord annotations as observations, and gives excellent global key recognition rates on a set of Beatles songs. The initial model is extended to audio input by using an existing chord recognition algorithm, which allows it to be tested on a much larger database. We show that a simple model of the upper partials in the signal improves the percentage scores. We also present a variant of the HMM with a continuous observation probability density, but show that the discrete version performs better. There follows a detailed analysis of the effects of the low-level signal processing parameters on key estimation accuracy and computation time. We find that much of the high-frequency information can be omitted without loss of accuracy, and that significant computational savings can be made by applying a threshold to the transform kernels. Results show that there is no single ideal set of parameters for all music, but that tuning the parameters can make a difference to accuracy. We discuss methods of evaluating more complex tonal changes than a single global key, and compare a metric that measures similarity to a ground truth with metrics rooted in music retrieval. We show that the two measures give different results, and therefore recommend that the choice of evaluation metric be determined by the intended application. Finally, we draw together our conclusions and use them to suggest areas for continuing this research: tonality model development, feature extraction, evaluation methodology, and applications of computational tonality estimation.
Funded by the Engineering and Physical Sciences Research Council (EPSRC).
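The decoding of a hidden key sequence from observed chords, as in the model above, is typically done with the Viterbi algorithm. The following is a generic sketch with toy key names and probabilities of my own choosing, not the model or parameters of the thesis:

```python
import numpy as np

def viterbi(obs, keys, log_init, log_trans, log_emit):
    """Most likely hidden key sequence for an observed chord index sequence.

    log_trans[i, j] = log P(key_j follows key_i);
    log_emit[i, c]  = log P(chord c is emitted by key_i).
    """
    T, K = len(obs), len(keys)
    delta = np.full((T, K), -np.inf)      # best log-probability per (time, key)
    psi = np.zeros((T, K), dtype=int)     # backpointers
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (K, K): prev key x next key
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [keys[k] for k in reversed(path)]
```

With a high self-transition probability for each key, the decoder prefers stable key regions, which is exactly why such a model suits global and local key estimation.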

    Ein Beitrag zur tonraumbasierten Analyse und Synthese musikalischer Audiosignale

    The goal of the present work is to improve the analysis and synthesis of musical audio signals through the application of tonal pitch spaces. The first part, written by Gabriel Gatzsche, comprises Chapters 2 to 6. It develops a mathematical-geometrical description of tonality on several hierarchical levels, building on Fred Lerdahl's Tonal Pitch Space, David Gatzsche's Cadence Circle and Elaine Chew's Spiral Array (calculation of geometric centroids within tonal pitch spaces). Using two formulas, the symmetry model generator formula and the SYM operator, it becomes possible 1) to describe the emergence of the most important levels of Western tonality from an array of fifths and 2) to generate several key-related models that are centred on the corresponding symmetry tone. These steps make it possible to link several existing pitch spaces into a unified framework called the symmetry model. To enable the pitch-space-based analysis of real music signals, the centroid vector within the circular pitch space is introduced. This feature vector is a low-dimensional representation of important tonal properties of musical audio signals, such as functional relationships, the mode, tension and relaxation, or harmonic ambiguities. Furthermore, the pitch class/pitch height space is introduced. This space assigns geometric positions to the different octaves of a given pitch class such that "well-sounding" chords can be created by choosing a simply shaped region of the space, and transforming (rotating, translating, scaling, etc.) such a region also generates well-sounding chord transitions. This leads to the development of a new musical instrument, the HarmonyPad, which allows a musician to create music by interacting with pitch spaces directly.
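The idea of a centroid vector in a circular pitch space can be illustrated with a deliberate simplification (a plain circle of fifths rather than the symmetry model itself): each pitch class is placed at an angle on the circle, and the chroma-weighted 2D centroid condenses the 12-dimensional chroma vector into a radius/angle pair, where the angle points towards the tonal centre and the radius reflects tonal clarity versus ambiguity:

```python
import numpy as np

def circle_of_fifths_centroid(chroma):
    """Chroma-weighted centroid in a circular pitch space (simplified sketch).

    Chromatic pitch class k is placed at angle 2*pi*(7*k mod 12)/12 on the
    circle of fifths (7 semitones = one fifth). Returns (radius, angle):
    a sharply peaked chroma vector gives a radius near 1 (clear tonality),
    a flat chroma vector gives a radius near 0 (maximal ambiguity).
    """
    k = np.arange(12)
    angles = 2 * np.pi * ((7 * k) % 12) / 12
    w = chroma / chroma.sum()                       # normalise to weights
    x = (w * np.cos(angles)).sum()
    y = (w * np.sin(angles)).sum()
    return np.hypot(x, y), np.arctan2(y, x)
```

The actual symmetry model feature vector of the dissertation carries more structure than this two-component summary, but the geometric intuition (centroids of weighted pitch positions) is the same.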
In the second part of the dissertation, comprising Chapters 7 to 12, Markus Mehnert investigates the applicability of the symmetry model to concrete problems of music information retrieval (MIR), in particular chord and key recognition. The state of the art in key recognition focuses on the estimation of major and minor keys. In this work a new symmetry-model-based algorithm is presented that clearly exceeds the results of current algorithms. Additionally, a new approach is proposed that extends key recognition to the estimation of the six most frequently used church modes, which represent the character of a musical piece better than the standard modes "major" and "minor" do. Furthermore, a new benchmark is introduced that allows the comparison of the current approach with future algorithms. A machine learning algorithm new to MIR (HMM/KNN) is proposed, combining hidden Markov models with k-nearest neighbours. In the field of chord recognition the new approach achieves better results than all previous algorithms. It is also shown that the symmetry model feature vector leads to significantly better chord recognition results than the chroma vector, which represents the state of the art.

    Exploiting prior knowledge during automatic key and chord estimation from musical audio

    Chords and keys are two ways of describing music. They are exemplary of a general class of symbolic notations that musicians use to exchange information about a music piece. This information can range from simple tempo indications such as "allegro" to precise instructions for a performer. Concretely, both keys and chords are timed labels that describe the harmony during certain time intervals, where harmony refers to the way music notes sound together. Chords describe the local harmony, whereas keys offer a more global overview and consequently cover a sequence of multiple chords. Common to all music notations is that certain characteristics of the music are described while others are ignored; the adopted level of detail depends on the purpose of the intended information exchange. A simple description such as "menuet", for example, only serves to roughly characterise a music piece. Sheet music, on the other hand, contains precise information about pitch, discretised information about timing, and limited information about timbre. Its goal is to permit a performer to recreate the piece, even though the information about timing and timbre still leaves some room for interpretation. The opposite of a symbolic notation is a music recording, which stores the music in a way that allows for a perfect reproduction. The disadvantage of a music recording is that it does not allow one to manipulate a single aspect of a music piece in isolation, or at least not without degrading the quality of the reproduction. For instance, it is not possible to change the instrumentation in a music recording, even though this would only require changing a few symbols in a symbolic notation. Despite the fundamental differences between a music recording and a symbolic notation, the two are of course intertwined.
Trained musicians can listen to a music recording (or live music) and write down a symbolic notation of the piece being played. This skill allows one, in theory, to create a symbolic notation for each recording in a music collection. In practice, however, this would be too labour-intensive for the large collections that are available these days through online stores or streaming services. Automating the notation process is therefore a necessity, and this is exactly the subject of this thesis. More specifically, this thesis deals with the extraction of keys and chords from a music recording. A database with keys and chords opens up applications that are not possible with a database of music recordings alone. On the one hand, chords can be used on their own as a compact representation of a music piece, for example to learn how to play an accompaniment for singing. On the other hand, keys and chords can also be used indirectly to accomplish another goal, such as finding similar pieces. Because music theory has been studied for centuries, a great body of knowledge about keys and chords is available. It is known that consecutive keys and chords form sequences that are anything but random: listeners have certain expectations that must be fulfilled in order to experience music as pleasant. Keys and chords are also strongly intertwined, as a given key implies that certain chords are likely to occur, and a given set of chords in turn implies an encompassing key. Consequently, a substantial part of this thesis is concerned with the question of whether musicological knowledge can be embedded in a technical framework in such a way that it helps to improve the automatic recognition of keys and chords. The technical framework adopted in this thesis is built around a hidden Markov model (HMM), which facilitates an easy separation of the different aspects involved in the automatic recognition of keys and chords.
Most experiments reviewed in the thesis focus on taking into account musicological knowledge about the musical context and about the expected chord duration. Technically speaking, this involves a manipulation of the transition probabilities in the HMMs. To account for the interaction between keys and chords, each HMM state actually represents the combination of a key and a chord label. In the first part of the thesis, a number of alternatives for modelling the context are proposed. In particular, separate key change and chord change models are defined such that they closely mirror the way musicians conceive harmony. Multiple variants are considered that differ in the size of the context that is accounted for and in the knowledge source from which they were compiled: some models are derived from a music corpus with key and chord notations, whereas others follow directly from music theory. In the second part of the thesis, the contextual models are embedded in a system for automatic key and chord estimation. The features used in that system are so-called chroma profiles, which represent the saliences of the pitch classes in the audio signal. These chroma profiles are acoustically modelled by means of templates (idealised profiles) and a distance measure. In addition to these acoustic models and the contextual models developed in the first part, durational models are also required; the latter ensure that the chord and key estimates attain specified mean durations. The resulting system is then used to conduct experiments that provide more insight into how each system component contributes to the quality of the final key and chord output. During the experimental study, the system complexity is gradually increased, starting from a system containing only an acoustic model of the features, which is subsequently extended, first with duration models and afterwards with contextual models.
The experiments show that taking into account the mean key and mean chord durations is essential to arrive at acceptable results for both key and chord estimation. The effect of using contextual information, however, is highly variable. On the one hand, the chord change model has only a limited positive impact on the chord estimation accuracy (two to three percentage points), but this impact is fairly stable across different model variants. On the other hand, the chord change model has a much larger potential to improve the key output quality (up to seventeen percentage points), but only on the condition that the variant of the model is well adapted to the tested music material. Lastly, the key change model has only a negligible influence on the system performance. In the final part of the thesis, a couple of extensions to the previously presented system are proposed and assessed. First, the global mean chord duration is replaced by key-chord-specific values, which has a positive effect on the key estimation performance. Next, the HMM system is modified such that the prior chord duration distribution is no longer geometric but better approximates the durations observed in an appropriate data set. This modification leads to a small improvement of the chord estimation performance, but it obviously requires a suitable data set with chord notations from which a target durational distribution can be retrieved. A final experiment demonstrates that increasing the scope of the contextual model only leads to statistically insignificant improvements, while the required computational load increases greatly.
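The link between duration models and transition probabilities can be made concrete: in an HMM, a state with self-transition probability p has a geometric duration prior with mean 1/(1-p) frames, so fixing a mean chord or key duration fixes p. A small sketch of that relationship (illustrative only, not code from the thesis):

```python
def self_transition_for_mean_duration(mean_frames):
    """Self-transition probability p of an HMM state whose implied geometric
    duration prior has the requested mean, since E[duration] = 1 / (1 - p)."""
    if mean_frames < 1:
        raise ValueError("mean duration must be at least one frame")
    return 1.0 - 1.0 / mean_frames

def mean_duration_for_self_transition(p):
    """Inverse mapping: expected dwell time (in frames) of a state with
    self-loop probability p."""
    return 1.0 / (1.0 - p)
```

Replacing this built-in geometric prior with an empirically observed duration distribution, as done in the final part of the thesis, requires moving beyond a plain self-loop, e.g. to explicit-duration (semi-Markov) modelling.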

    Improving supervised music classification by means of multi-objective evolutionary feature selection

    In this work, several strategies are developed to reduce the impact of two limitations of most current studies in supervised music classification: the classification rules and music features often have low interpretability, and algorithms and feature subsets are almost always evaluated with respect to only one or a few common evaluation criteria considered separately. Although music classification is in most cases user-centred, and it is desirable to understand the properties of the related music categories well, many current approaches are based on low-level characteristics of the audio signal. We have designed a large set of more meaningful and interpretable high-level features, which can completely replace the baseline low-level feature set and are even capable of significantly outperforming it for categorisation into three music styles. These features provide comprehensible insight into the properties of music genres and styles: instrumentation, moods, harmony, and temporal and melodic characteristics. A crucial advantage of high-level audio features is that they can be extracted from any digitally available music piece, independently of its popularity, the availability of the corresponding score, or an Internet connection for downloading metadata and community features, which are sometimes erroneous and incomplete. Some of the high-level features that are particularly successful for classification into genres and styles were developed using a novel approach called sliding feature selection. Here, high-level features are estimated from low-level and other high-level features during a sequence of supervised classification steps, and an integrated evolutionary feature selection helps to search for the most relevant features in each step of this sequence. Another drawback of many related state-of-the-art studies is that algorithms and feature sets are almost always compared using only one or a few evaluation criteria considered separately.
However, different evaluation criteria are often in conflict: an algorithm optimised only with respect to classification quality may be slow, have high storage demands, perform worse on imbalanced data, or require greater user effort for labelling songs. The simultaneous optimisation of multiple conflicting criteria has so far remained almost unexplored in music information retrieval; apart from several of our own preliminary publications, it is applied to feature selection in music classification for the first time in this thesis. As an exemplary multi-objective approach to optimising feature selection, we simultaneously minimise the classification error and the number of features used for classification. Sets with more features lead to higher classification quality. On the other hand, sets with fewer features and lower classification performance may strongly decrease the demands on storage and computing time and reduce the risk of overly complex and overfitted classification models. Furthermore, we describe several groups of evaluation criteria and discuss other reasonable multi-objective optimisation scenarios for music data analysis.
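The bi-objective setup above (simultaneously minimising classification error and feature count) rests on Pareto dominance: a feature set is kept only if no other set is at least as good on both objectives and strictly better on one. A minimal sketch of that criterion, independent of any particular evolutionary algorithm:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b. Each solution is a tuple
    (classification_error, n_features); both objectives are minimised."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Non-dominated subset: the error vs. feature-count trade-off curve
    from which a user can pick a preferred compromise."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

A multi-objective evolutionary algorithm such as NSGA-II maintains and refines exactly such a front over generations rather than collapsing the objectives into a single score.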

    Audio Key Finding Using Low-Dimensional Spaces.

    No full text