
    Algorithmic Clustering of Music

    We present a fully automatic method for music classification, based only on compression of strings that represent the music pieces. The method uses no background knowledge about music whatsoever: it is completely general and can, without change, be used in different areas such as linguistic classification and genomics. It is based on an ideal theory of the information content in individual objects (Kolmogorov complexity), information distance, and a universal similarity metric. Experiments show that the method distinguishes reasonably well between various musical genres and can even cluster pieces by composer.
    Comment: 17 pages, 11 figures
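    The compression-based similarity underlying this approach can be sketched with the normalized compression distance. The snippet below is only an illustration: it uses zlib as a stand-in compressor and invented toy note strings, so the compressor choice and string representation are assumptions, not the paper's exact pipeline.

        # Minimal sketch of compression-based similarity (normalized compression
        # distance). zlib and the toy note strings are assumptions for illustration.
        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized Compression Distance between two byte strings."""
            cx = len(zlib.compress(x))
            cy = len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Pieces serialized as strings can be compared pairwise; the resulting
        # distance matrix can then be fed to any standard clustering algorithm.
        piece_a = b"C4 E4 G4 C5 G4 E4 C4"
        piece_b = b"C4 E4 G4 C5 G4 E4 C4 D4"
        piece_c = b"F#2 A2 C#3 F#3"
        print(ncd(piece_a, piece_b))  # smaller: similar material
        print(ncd(piece_a, piece_c))  # larger: dissimilar material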

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g. melody, polyphony, accompaniment or counterpoint)? For what destination and for what use: to be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file)?
    - Representation: What concepts are to be manipulated (e.g. waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g. MIDI, piano roll or text)? How will the representation be encoded (e.g. scalar, one-hot or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g. feedforward network, recurrent network, autoencoder or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g. variability, interactivity and creativity)?
    - Strategy: How do we model and control the process of generation (e.g. single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects; a small illustration of the Representation dimension follows below.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
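    As a concrete illustration of the Representation dimension, the sketch below one-hot encodes a short monophonic melody into a time-step-by-pitch matrix. The pitch alphabet and note names are assumptions chosen for the example, not taken from the survey.

        # Hedged sketch of one representation choice: one-hot encoding a melody.
        # The pitch range below is illustrative only.
        import numpy as np

        PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
        INDEX = {p: i for i, p in enumerate(PITCHES)}

        def one_hot_melody(melody):
            """Return a (time steps x pitches) one-hot matrix for a note list."""
            mat = np.zeros((len(melody), len(PITCHES)), dtype=np.float32)
            for t, pitch in enumerate(melody):
                mat[t, INDEX[pitch]] = 1.0
            return mat

        print(one_hot_melody(["C4", "E4", "G4", "C5"]))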

    Topology of Networks in Generalized Musical Spaces

    The abstraction of musical structures (notes, melodies, chords, harmonic or rhythmic progressions, etc.) as mathematical objects in a geometrical space is one of the great accomplishments of contemporary music theory. Building on this foundation, I generalize the concept of musical spaces as networks and derive functional principles of compositional design by the direct analysis of the network topology. This approach provides a novel framework for the analysis and quantification of similarity of musical objects and structures, and suggests a way to relate such measures to the human perception of different musical entities. Finally, the analysis of a single work or a corpus of compositions as complex networks provides alternative ways of interpreting the compositional process of a composer by quantifying emergent behaviors with well-established statistical mechanics techniques. Interpreting the latter as probabilistic randomness in the network, I develop novel compositional design frameworks that are central to my own artistic research.
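    One way to picture the network view of a musical space is sketched below: musical objects (here, chords, purely as an invented example) become nodes, their successions become weighted edges, and standard topology measures are read off with networkx. The progression, node choice and selected metrics are assumptions for illustration, not the paper's analysis.

        # Illustrative sketch: a chord-progression network and a few topology metrics.
        # The progression and metric selection are assumptions, not from the paper.
        import networkx as nx

        progression = ["C", "Am", "F", "G", "C", "F", "G", "Am", "F", "C"]

        G = nx.DiGraph()
        for a, b in zip(progression, progression[1:]):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

        # Topology measures that could be related to compositional design choices.
        print("degree centrality:", nx.degree_centrality(G))
        print("clustering:", nx.clustering(G.to_undirected()))
        print("density:", nx.density(G))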

    Deep Learning vs Markov Model in Music Generation

    Artificial intelligence is one of the fastest growing fields in Computer Science at the moment, mainly due to recent advances in machine learning and deep learning algorithms. As a result of these advances, deep learning has been used extensively in applications related to computerized audio/music generation. The main body of this thesis is an experiment based on a similar experiment done by Mike Kayser of Stanford University in 2013 for his thesis "Generative Models of Music", where he used Hidden Markov Models and tested the quality/accuracy of the generated music with a music-composer classifier. The experiment involves creating Markov models for music generation and then creating new models that use deep learning algorithms. These models were trained on MIDI files of piano music from various composers and were used to generate new music in a style similar to the composer each was trained on. To compare the results of these models quantitatively, the music they generated was passed to a classifier to see which technique produces a model whose output is correctly classified as being from the composer the model was trained on. The results showed that the classifier labeled music generated by the deep learning model as being from the composer it was trained on more accurately than music generated by the Markov model.
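    The Markov side of the comparison can be sketched as a first-order note-transition model trained on a toy sequence. The corpus, note names and generation routine below are assumptions for illustration and stand in for the thesis's MIDI-trained models and classifier.

        # Minimal sketch of a first-order Markov model for note generation.
        # The toy corpus and parameters are assumptions, not the thesis's data.
        import random
        from collections import defaultdict

        def train_markov(notes):
            """Collect note-to-note transitions from a training sequence."""
            transitions = defaultdict(list)
            for a, b in zip(notes, notes[1:]):
                transitions[a].append(b)
            return transitions

        def generate(transitions, start, length=16, seed=0):
            """Sample a new sequence by walking the transition table."""
            random.seed(seed)
            out = [start]
            for _ in range(length - 1):
                nxt = transitions.get(out[-1])
                if not nxt:
                    break
                out.append(random.choice(nxt))
            return out

        corpus = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
        model = train_markov(corpus)
        print(generate(model, "C4"))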

    On the Modeling of Musical Solos as Complex Networks

    Notes in a musical piece are building blocks employed in non-random ways to create melodies. It is the "interaction" among a limited set of notes that allows constructing the variety of musical compositions written over the centuries and across different cultures. Networks are a modeling tool commonly employed to represent a set of entities interacting in some way; thus, the notes composing a melody can be seen as nodes of a network that are connected whenever they are played in sequence. The outcome of such a process is a directed graph. Using complex network theory, the main metrics of musical graphs can be measured and used to characterize the related musical pieces. In this paper, we define a framework to represent melodies as networks and then provide an analysis of a set of guitar solos performed by prominent musicians. The results of this study indicate that the presented model can have an impact on audio and multimedia applications such as music classification, identification, e-learning, automatic music generation, and multimedia entertainment.
    Comment: to appear in Information Science, Elsevier. Please cite the paper including such information. arXiv admin note: text overlap with arXiv:1603.0497
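    The graph construction described above can be sketched as follows, assuming an invented note sequence for the solo: each note is a node, an edge links two notes played in succession, and a few illustrative metrics are read off with networkx. The metric selection is an assumption, not the paper's chosen set.

        # Sketch of a melody-as-network construction: consecutive notes become
        # directed, weighted edges. The solo and metrics are illustrative only.
        import networkx as nx

        solo = ["E5", "G5", "E5", "D5", "C5", "D5", "E5", "G5", "A5", "G5", "E5"]

        G = nx.DiGraph()
        for a, b in zip(solo, solo[1:]):
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

        print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
        print("average degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
        print("betweenness:", nx.betweenness_centrality(G))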