3 research outputs found

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    iJazzARTIST: Intelligent Jazz Accompanist for Real-Time human-computer Interactive muSic improvisaTion

    Some of the most essential characteristics of improvisation on jazz standards are reflected in the accompaniment. Given a lead sheet as common ground, the collaborative process of real-time music improvisation between a human and an artificial agent is a scenario of great interest in the Music Information Retrieval (MIR) domain. Previous approaches to jazz improvisation accompaniment have presented systems that cannot generate accompaniment while also adapting to dynamically varying constraints that depend on new, improvised data. This thesis proposes a jazz accompaniment system capable of providing suitable chord voicings for a solo while complying both with the soloist's intentions and with the constraints set in advance by the lead sheet. The artificial agent consists of two sub-systems: a model responsible for predicting the human soloist's intentions, and a second model that generates the accompaniment by exploiting the expectations about the human agent's intentions computed by the first model. Both models are based on Recurrent Neural Networks (RNNs). The dataset used for training underwent multi-stage processing, including probabilistic refinement, aimed at preserving and enriching the information required for the task. The system was tested on two jazz standards, demonstrating compliance with the harmonic constraints as well as output variability dependent on the solo improvisation. Emerging limitations and potential future directions are discussed in the conclusion of this work.
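    The two-model design described above lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of an RNN that predicts the soloist's next tokens and a second RNN that produces accompaniment voicings conditioned on those predictions and on the lead-sheet chords. All names, vocabulary sizes, and dimensions are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical two-stage pipeline: soloist-intention predictor + accompaniment generator.
# Vocabulary sizes and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

PITCH_VOCAB = 130    # assumed: 128 MIDI pitches + rest + hold tokens
CHORD_VOCAB = 60     # assumed: chord-symbol vocabulary taken from the lead sheet
VOICING_VOCAB = 500  # assumed: discrete chord-voicing vocabulary

class SoloistPredictor(nn.Module):
    """Predicts a distribution over the soloist's next token from past solo tokens."""
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(PITCH_VOCAB, 64)
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, PITCH_VOCAB)

    def forward(self, solo_tokens):                  # (batch, time)
        h, _ = self.rnn(self.embed(solo_tokens))     # (batch, time, hidden)
        return self.head(h)                          # logits over the next solo token

class AccompanimentGenerator(nn.Module):
    """Generates voicing logits from the predicted solo and the lead-sheet chords."""
    def __init__(self, hidden=256):
        super().__init__()
        self.solo_embed = nn.Embedding(PITCH_VOCAB, 64)
        self.chord_embed = nn.Embedding(CHORD_VOCAB, 64)
        self.rnn = nn.GRU(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOICING_VOCAB)

    def forward(self, predicted_solo, chord_symbols):
        x = torch.cat([self.solo_embed(predicted_solo),
                       self.chord_embed(chord_symbols)], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)                          # logits over chord voicings

# Toy forward pass: one bar of 16 sixteenth-note steps.
solo = torch.randint(0, PITCH_VOCAB, (1, 16))
chords = torch.randint(0, CHORD_VOCAB, (1, 16))
predictor, accompanist = SoloistPredictor(), AccompanimentGenerator()
predicted_next = predictor(solo).argmax(dim=-1)      # greedy prediction of soloist intent
voicing_logits = accompanist(predicted_next, chords)
print(voicing_logits.shape)                          # torch.Size([1, 16, 500])
```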

    L-Music: an approach to assisted music composition using L-Systems

    Generative music systems have been researched for an extended period of time, and the scientific corpus of this research field is now reaching the everyday musician and composer. With these tools, the creative process of writing music can be augmented or completely replaced by machines. The work in this document aims to contribute to research on assisted music composition systems. To that end, we reviewed the state of the art in these fields and found a plethora of methodologies and approaches, each providing interesting results (to name a few: neural networks, statistical models, and formal grammars). We identified Lindenmayer Systems, or L-Systems, as the most interesting and least explored approach for developing an assisted music composition prototype, aptly named L-Music, because of their ability to produce complex outputs from simple structures. L-Systems were initially proposed as parallel string rewriting grammars to model the growth of algae. Their applications soon turned graphical (e.g., drawing fractals), and eventually they were applied to music generation. Given that our prototype is assistive, we also gave user interface and user experience design due consideration. The implemented interface is straightforward and simple to use, with a structured visual hierarchy and flow; it enables musicians and composers to select their desired instruments, select L-Systems for generating music or create custom ones, and edit musical parameters (e.g., scale and octave range) to further control the output of L-Music: musical fragments that a musician or composer can then use in their own works. Three musical interpretations of L-Systems were implemented: a random interpretation, a scale-based interpretation, and a polyphonic interpretation. All three produced interesting musical ideas that we found potentially usable by musicians and composers in their own creative work. Although positive results were obtained, the prototype leaves many improvements for future work: further musical interpretations can be added, and the number of musical parameters a user can edit can be increased. We also identified giving the user control over the musical meaning of L-Systems as an interesting future challenge.
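    To make the L-System mechanics concrete, here is a minimal, hypothetical Python sketch of the two ideas the abstract describes: parallel string rewriting and a simple scale-based musical interpretation of the resulting string. The axiom, production rules, and symbol-to-pitch mapping are illustrative assumptions, not those used in L-Music.

```python
# Parallel string rewriting plus a scale-based interpretation of the result.
# Axiom, rules, and symbol meanings are assumed for illustration only.

def rewrite(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the production rules to every symbol in parallel, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def scale_interpretation(lstring: str, scale=(60, 62, 64, 65, 67, 69, 71)):
    """Map symbols to scale degrees (C major from middle C, MIDI 60).
    '+' / '-' move up or down the scale; letters emit the current note."""
    notes, degree = [], 0
    for ch in lstring:
        if ch == "+":
            degree += 1
        elif ch == "-":
            degree -= 1
        elif ch.isalpha():
            octave_shift, idx = divmod(degree, len(scale))
            notes.append(scale[idx] + 12 * octave_shift)
    return notes

# Example run with assumed rules: each iteration rewrites all symbols at once.
rules = {"A": "A+B", "B": "-A"}
lstring = rewrite("A", rules, iterations=4)
print(lstring)                        # expanded string after 4 parallel rewrites
print(scale_interpretation(lstring))  # MIDI pitches a composer could audition
```

    A random interpretation would draw pitches freely for each letter, and a polyphonic one would emit chords instead of single notes; both would reuse the same rewriting step shown above.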