
    Creative Support Musical Composition System: a study on Multiple Viewpoints Representations in Variable Markov Oracle

    The mid-20th century witnessed the emergence of an area of study focused on the automatic generation of musical content by computational means. Early examples concentrated on offline processing of musical data, but the community has recently moved towards interactive, real-time musical systems. Furthermore, a recent trend stresses the importance of assistive technology, which promotes a user-in-the-loop approach by offering multiple suggestions to a given creative problem. In this context, my research aims to foster new software tools for creative support systems, where algorithms can collaboratively participate in the composition flow. In greater detail, I seek a tool that learns from variable-length musical data to provide real-time feedback during the composition process. In light of the multidimensional and hierarchical structure of music, I aim to study representations which abstract its temporal patterns, to foster the generation of multiple ranked solutions to a given musical context. Ultimately, the subjective nature of the choice is left to the user, to whom a limited number of 'optimal' solutions is provided. A symbolic music representation, manifested as Multiple Viewpoint Models combined with the Variable Markov Oracle (VMO) automaton, is used to test the interaction between the multi-dimensionality of the representation and the optimality of the VMO model in providing style-coherent, novel, and diverse solutions. To evaluate the system, an experiment was conducted to validate the tool in an expert-based scenario with composition students, using the Creativity Support Index test.
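    The multiple-viewpoint representation described above derives several parallel symbolic sequences from a single melody. A minimal sketch, assuming a toy melody and four viewpoints (pitch, duration, melodic interval, contour) commonly used in the multiple-viewpoints literature — not the thesis's actual feature set:

    ```python
    # Sketch: deriving multiple viewpoints from a symbolic melody.
    # Each note is (midi_pitch, duration_in_beats). The melody and the
    # chosen viewpoints are illustrative assumptions.

    melody = [(60, 1.0), (62, 0.5), (64, 0.5), (62, 1.0), (57, 2.0)]

    def viewpoint_pitch(notes):
        return [p for p, _ in notes]

    def viewpoint_duration(notes):
        return [d for _, d in notes]

    def viewpoint_interval(notes):
        # Signed semitone distance between consecutive pitches.
        pitches = viewpoint_pitch(notes)
        return [b - a for a, b in zip(pitches, pitches[1:])]

    def viewpoint_contour(notes):
        # Reduce each interval to -1 (down), 0 (repeat), +1 (up).
        return [(i > 0) - (i < 0) for i in viewpoint_interval(notes)]

    views = {
        "pitch": viewpoint_pitch(melody),
        "duration": viewpoint_duration(melody),
        "interval": viewpoint_interval(melody),
        "contour": viewpoint_contour(melody),
    }
    for name, seq in views.items():
        print(name, seq)
    ```

    Each viewpoint sequence can then be fed to a separate predictive model, with the per-viewpoint predictions combined when ranking candidate continuations.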

    Using Multidimensional Sequences For Improvisation In The OMax Paradigm

    Automatic music improvisation systems based on the OMax paradigm use training over a one-dimensional sequence to generate original improvisations. Different systems use different heuristics to guide the improvisation, but none of them benefits from training over a multidimensional sequence. We propose a system that creates improvisations in a way closer to that of a human improviser, where the intuition of a context is enriched with knowledge. This system combines a probabilistic model, trained on a corpus and taking into account the multidimensional aspect of music, with a factor oracle. The probabilistic model is constructed by interpolating sub-models and represents the knowledge of the system, while the factor oracle (the structure used in OMax) represents the context. The results show the potential of such a system to perform better navigation in the factor oracle, guided by the knowledge of several dimensions.
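    The factor oracle at the heart of OMax admits a compact online construction. The sketch below is the generic textbook algorithm (state 0 is the empty word; suffix links enable the recombination jumps used during improvisation), not the OMax codebase itself:

    ```python
    # Sketch: online factor oracle construction (Allauzen et al. style).
    # States 0..n; trans[i] maps a symbol to a target state; sfx[i] is
    # the suffix link of state i (sfx[0] = -1 by convention).

    def build_factor_oracle(seq):
        n = len(seq)
        trans = [dict() for _ in range(n + 1)]
        sfx = [None] * (n + 1)
        sfx[0] = -1
        for i, sigma in enumerate(seq):
            trans[i][sigma] = i + 1          # forward transition
            k = sfx[i]
            # Walk suffix links, adding external transitions on sigma.
            while k > -1 and sigma not in trans[k]:
                trans[k][sigma] = i + 1
                k = sfx[k]
            sfx[i + 1] = 0 if k == -1 else trans[k][sigma]
        return trans, sfx

    trans, sfx = build_factor_oracle("aab")
    print(trans[0], sfx)
    ```

    Improvisation then amounts to a walk over the oracle: follow forward transitions to replay the learned material, and occasionally jump through a suffix link to recombine it.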

    LEARNING AND VISUALIZING MUSIC SPECIFICATIONS USING PATTERN GRAPHS

    We describe a system to learn and visualize specifications from songs in symbolic and audio formats. The core of our approach is based on a software engineering procedure called specification mining. Our procedure extracts patterns from feature vectors and uses them to build pattern graphs. The feature vectors are created by segmenting songs and extracting time- and frequency-domain features from them, such as chromagrams, chord degree and interval classification. The pattern graphs built on these feature vectors provide the likelihood of a pattern between nodes, as well as start and ending nodes. The pattern graphs learned from songs describe formal specifications that can be used for human-interpretable quantitative and qualitative song comparison, or to perform supervisory control in machine improvisation. We offer results in song summarization, song and style validation, and machine improvisation with formal specifications.
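    A pattern graph of the kind described above can be sketched as a directed graph whose nodes are feature symbols and whose edge weights are empirical transition likelihoods, with explicit start and end markers. The mining procedure below is a loose illustration under assumed inputs (toy chord-degree sequences), not the paper's exact algorithm:

    ```python
    from collections import defaultdict

    def build_pattern_graph(sequences):
        # Count symbol-to-symbol transitions across all sequences,
        # bracketing each sequence with <START>/<END> markers, then
        # normalize counts into per-node likelihoods.
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            path = ["<START>"] + list(seq) + ["<END>"]
            for a, b in zip(path, path[1:]):
                counts[a][b] += 1
        graph = {}
        for node, nxt in counts.items():
            total = sum(nxt.values())
            graph[node] = {b: c / total for b, c in nxt.items()}
        return graph

    # Toy chord-degree sequences standing in for segmented songs.
    songs = [["I", "IV", "V", "I"], ["I", "V", "I"]]
    g = build_pattern_graph(songs)
    print(g["I"])  # likelihoods of what follows degree I
    ```

    Comparing two songs then reduces to comparing their graphs, and supervisory control amounts to rejecting generated transitions whose likelihood in the graph is zero.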

    DYCI2 agents: merging the "free", "reactive", and "scenario-based" music generation paradigms

    The collaborative research and development project DYCI2, Creative Dynamics of Improvised Interaction, focuses on conceiving, adapting, and bringing into play efficient models of artificial listening, learning, interaction, and generation of musical contents. It aims at developing creative and autonomous digital musical agents able to take part in various human projects in an interactive and artistically credible way and, in the end, at contributing to the perceptive and communicational skills of embedded artificial intelligence. The areas concerned are live performance, production, pedagogy, and active listening. This paper gives an overview focusing on one of the three main research issues of this project: conceiving multi-agent architectures and models of knowledge and decision in order to explore scenarios of music co-improvisation involving human and digital agents. The objective is to merge the usually exclusive "free", "reactive", and "scenario-based" paradigms in interactive music generation to adapt to a wide range of musical contexts involving hybrid temporality and multimodal interactions.

    Artificial Intelligence Music Generators in Real Time Jazz Improvisation: a performer’s view

    A highly controversial entrance of Artificial Intelligence (AI) music generators into the world of music composition and performance is currently advancing. Fruitful research from Music Information Retrieval, Neural Networks and Deep Learning, among other areas, is shaping this future. Embodied and non-embodied AI systems have stepped into the world of jazz in order to co-create idiomatic music improvisations. But how musical are these improvisations? This dissertation looks at the resulting melodic improvisations produced by the OMax, ImproteK and Djazz (OID) AI generators through the lens of the elements of music, and it does so from a performer’s point of view.
    The analysis is based mainly on the evaluation of already published results, as well as on a case study I carried out during the completion of this essay, which includes performance, listening and evaluation of generated improvisations of OMax. The essay also reflects upon philosophical issues and the cognitive foundations of emotion and meaning, and provides a comprehensive analysis of the functionality of OID.

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
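    Among the probabilistic methods the survey covers, the simplest is a first-order Markov chain over pitches: count which notes follow which in a corpus, then sample continuations from those counts. A minimal sketch with an assumed toy corpus:

    ```python
    import random
    from collections import defaultdict

    def train_markov(melodies):
        # First-order Markov model: for each pitch, collect the pitches
        # observed to follow it (duplicates preserve frequency weighting).
        table = defaultdict(list)
        for mel in melodies:
            for a, b in zip(mel, mel[1:]):
                table[a].append(b)
        return table

    def generate(table, start, length, rng=random.Random(0)):
        # Random walk through the transition table from a start pitch.
        out = [start]
        for _ in range(length - 1):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(rng.choice(successors))
        return out

    # Toy corpus: two short MIDI-pitch melodies (illustrative only).
    corpus = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]
    table = train_markov(corpus)
    print(generate(table, 60, 8))
    ```

    Higher-order chains, grammars, or constraint solvers refine the same idea: restrict generated material to continuations the model has licensed.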

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.

    AN APPROACH TO MACHINE DEVELOPMENT OF MUSICAL ONTOGENY

    This Thesis pursues three main objectives: (i) to use computational modelling to explore how music is perceived, cognitively processed and created by human beings; (ii) to explore interactive musical systems as a method to model and achieve the transmission of musical influence in artificial worlds and between humans and machines; and (iii) to experiment with artificial and alternative developmental musical routes in order to observe the evolution of musical styles. In order to achieve these objectives, this Thesis introduces a new paradigm for the design of computer interactive musical systems called the Ontomemetical Model of Music Evolution (OMME), which includes the fields of musical ontogenesis and memetics. OMME-based systems are designed to artificially explore the evolution of music centred on human perceptive and cognitive faculties. The potential of the OMME is illustrated with two interactive musical systems, the Rhythmic Meme Generator (RGeme) and the Interactive Musical Environments (iMe), which have been tested in a series of laboratory experiments and live performances. The introduction to the OMME is preceded by an extensive and critical overview of state-of-the-art computer models that explore musical creativity and interactivity, in addition to a systematic exposition of the major issues involved in the design and implementation of these systems. This Thesis also proposes innovative solutions for (i) the representation of musical streams based on perceptive features, (ii) music segmentation, (iii) a memory-based music model, (iv) the measure of distance between musical styles, and (v) an improvisation-based creative model.
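    One of the solutions listed above, a measure of distance between musical styles, can be sketched by profiling each melody as a normalized histogram of its melodic intervals and comparing the histograms. The L1 comparison and the toy melodies below are illustrative assumptions, not the thesis's actual metric:

    ```python
    from collections import Counter

    def interval_histogram(melody):
        # Normalized histogram of melodic intervals: a crude style profile.
        ivs = [b - a for a, b in zip(melody, melody[1:])]
        c = Counter(ivs)
        total = len(ivs)
        return {k: v / total for k, v in c.items()}

    def style_distance(mel_a, mel_b):
        # L1 distance between the two interval histograms:
        # 0.0 for identical profiles, up to 2.0 for disjoint ones.
        ha, hb = interval_histogram(mel_a), interval_histogram(mel_b)
        keys = set(ha) | set(hb)
        return sum(abs(ha.get(k, 0.0) - hb.get(k, 0.0)) for k in keys)

    stepwise = [60, 62, 64, 65, 67, 65, 64, 62, 60]   # mostly steps
    leapy = [60, 67, 55, 72, 48, 70]                  # wide leaps
    print(style_distance(stepwise, stepwise))         # identical styles
    print(style_distance(stepwise, leapy))
    ```

    Richer profiles (rhythm, contour, harmony) can be added as further histograms and their distances combined, which is closer in spirit to a multi-feature style model.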

    Generating structured music for bagana using quality metrics based on Markov models.

    This research is partially supported by the project Lrn2Cre8, which acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET Grant No. 610859.