19 research outputs found

    Cross-Modal Variational Inference For Bijective Signal-Symbol Translation

    Extraction of symbolic information from signals is an active field of research enabling numerous applications, especially in the Music Information Retrieval domain. This complex task, also related to topics such as pitch extraction and instrument recognition, has given rise to numerous approaches, mostly based on advanced signal-processing algorithms. However, these techniques are often non-generic: they extract definite physical properties of the signal (pitch, octave) but do not allow arbitrary vocabularies or more general annotations. They are also one-sided, meaning that they can extract symbolic data from an audio signal but cannot perform the reverse process of symbol-to-signal generation. In this paper, we propose a bijective approach to signal/symbol translation by turning this problem into a density-estimation task over the signal and symbolic domains, considered as related random variables. We estimate this joint distribution with two variational auto-encoders, one for each domain, whose inner representations are forced to match through an additive constraint, so that both models can learn and generate separately while supporting signal-to-symbol and symbol-to-signal inference. We test our models on pitch, octave, and dynamics symbols, a fundamental step towards music transcription and label-constrained audio generation. In addition to its versatility, the system is rather light during training and generation, and it enables several interesting creative uses that we outline at the end of the article.
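    The following is a minimal sketch of the kind of paired-VAE objective this abstract describes, assuming PyTorch; the layer sizes, the L2 form of the latent-matching penalty, and the `lambda_match` weight are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: two VAEs (signal, symbol) whose latent codes are encouraged to match,
# so inference can go signal -> z -> symbol and symbol -> z -> signal.
# Dimensions, architectures and the L2 matching penalty are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

def elbo(x, x_hat, mu, logvar):
    rec = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer
    return rec + kl

signal_vae, symbol_vae = VAE(x_dim=512), VAE(x_dim=32)
opt = torch.optim.Adam(list(signal_vae.parameters()) + list(symbol_vae.parameters()), lr=1e-3)
lambda_match = 10.0  # weight of the additive latent-matching constraint (assumption)

def training_step(signal, symbol):
    s_hat, mu_s, lv_s = signal_vae(signal)
    y_hat, mu_y, lv_y = symbol_vae(symbol)
    match = F.mse_loss(mu_s, mu_y, reduction="sum")  # force the two latent codes to agree
    loss = elbo(signal, s_hat, mu_s, lv_s) + elbo(symbol, y_hat, mu_y, lv_y) + lambda_match * match
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Once trained, signal-to-symbol inference decodes the signal's latent with the symbol decoder:
# mu_s, _ = signal_vae.encode(signal); predicted_symbol = symbol_vae.dec(mu_s)
```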

    DYCI2 agents: merging the "free", "reactive", and "scenario-based" music generation paradigms

    The collaborative research and development project DYCI2, Creative Dynamics of Improvised Interaction, focuses on conceiving, adapting, and bringing into play efficient models of artificial listening, learning, interaction, and generation of musical content. It aims at developing creative and autonomous digital musical agents able to take part in various human projects in an interactive and artistically credible way, and ultimately at contributing to the perceptive and communicational skills of embedded artificial intelligence. The areas concerned are live performance, production, pedagogy, and active listening. This paper gives an overview of one of the three main research issues of the project: conceiving multi-agent architectures and models of knowledge and decision in order to explore scenarios of music co-improvisation involving human and digital agents. The objective is to merge the usually exclusive "free", "reactive", and "scenario-based" paradigms of interactive music generation to adapt to a wide range of musical contexts involving hybrid temporality and multimodal interactions.
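    As a rough illustration of how the "scenario-based" and "reactive" paradigms can coexist in a single generation loop (a deliberately reduced toy, not the DYCI2 agents' actual memory and navigation model), consider a memory of labeled events navigated under a predefined scenario, with live input allowed to override the next target label:

```python
# Toy co-improvisation loop mixing a "scenario-based" constraint (a predefined label
# sequence) with a "reactive" one (live input can override the next target label).
# This is an illustrative reduction, not the DYCI2 agents' actual model.
import random

Memory = list[tuple[str, str]]          # (label, content) events learned from a corpus

def generate(memory: Memory, scenario: list[str], live_labels: dict[int, str]) -> list[str]:
    output, last_index = [], None
    for step, target in enumerate(scenario):
        target = live_labels.get(step, target)           # reactive override of the scenario
        candidates = [i for i, (lab, _) in enumerate(memory) if lab == target]
        if not candidates:                               # "free" fallback: anything goes
            candidates = range(len(memory))
        # prefer continuity: follow the previously used event when its successor also fits
        if last_index is not None and last_index + 1 in candidates:
            idx = last_index + 1
        else:
            idx = random.choice(list(candidates))
        output.append(memory[idx][1])
        last_index = idx
    return output

memory = [("A", "a1"), ("B", "b1"), ("A", "a2"), ("C", "c1"), ("B", "b2")]
scenario = ["A", "B", "A", "C"]
print(generate(memory, scenario, live_labels={2: "B"}))  # the listener forces a "B" at step 2
```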

    Generation of two induced pluripotent stem cell lines and the corresponding isogenic controls from Parkinson's disease patients carrying the heterozygous mutations c.815G>A (p.R272Q) or c.1348C>T (p.R450C) in the RHOT1 gene encoding Miro1

    Fibroblasts from two Parkinson's disease (PD) patients carrying either the heterozygous mutation c.815G>A (Miro1 p.R272Q) or c.1348C>T (Miro1 p.R450C) in the RHOT1 gene were converted into induced pluripotent stem cells (iPSCs) using RNA-based and episomal reprogramming, respectively. The corresponding isogenic gene-corrected lines were generated using CRISPR/Cas9 technology. These two isogenic pairs will be used to study Miro1-related molecular mechanisms underlying neurodegeneration in relevant iPSC-derived neuronal models (e.g., midbrain dopaminergic neurons and astrocytes).

    FUNCTIONAL CHARACTERIZATION OF NEURODEGENERATION IN CELLULAR AND MOUSE MODELS OF PARKINSON'S DISEASE CARRYING PATHOGENIC VARIANTS IN THE RHOT1 GENE ENCODING MIRO1

    Parkinson's disease (PD) is the fastest growing neurological disorder and the most common neurodegenerative movement disorder, with the number of patients expected to double by 2040 (Dorsey et al., 2018b). While most PD cases are sporadic, approximately 10% of patients develop PD due to genetic causes. Interestingly, many of these PD-causing genes are related to mitochondrial function, which is in line with the main pathological hallmark of PD, namely the selective death of dopaminergic (DA) neurons in the Substantia Nigra pars compacta (SNpc) of the brain. Indeed, DA neurons rely heavily on adenosine triphosphate (ATP) production via mitochondrial oxidative phosphorylation to sustain their pace-making activity. The last few years have seen an increasing interest in the Mitochondrial Rho GTPase 1 (Miro1) protein, a mitochondria-anchored cytosolic calcium sensor involved in the regulation of mitochondria-ER contact sites (MERCs), mitochondrial transport, and mitophagy. Our team previously demonstrated in fibroblasts that four different Miro1 mutations found in PD patients pathologically affect calcium homeostasis, mitochondrial quality control, and MERC formation (Berenguer-Escuder et al., 2019a; Grossmann et al., 2019a). Moreover, Miro1 p.R272Q mutant induced pluripotent stem cell (iPSC)-derived neurons display a significant impairment of cytosolic calcium handling compared to age- and gender-matched controls, similarly to fibroblasts from this patient (Berenguer-Escuder et al., 2020a). This phenotype was accompanied by dysregulated MERC levels as well as impaired mitophagy and autophagy, supporting the role of Miro1 as a rare genetic risk factor for PD (Berenguer-Escuder et al., 2020a). Furthermore, recent studies revealed a pathological retention of Miro1 upon mitophagy induction, thus delaying mitophagy in cells from genetic PD patients as well as from a significant proportion of sporadic PD patients (Hsieh et al., 2016a, 2019a; Shaltouki et al., 2018a). In this thesis, we first generated and characterized iPSC and isogenic control lines from the four aforementioned PD patients. We then explored the pathogenic effect of the Miro1 p.R272Q mutation in three different models, namely iPSC-derived neurons, midbrain organoids (MOs), and Miro1 p.R285Q knock-in (KI) mice. We confirmed the exacerbated sensitivity to calcium stress in vitro in neurons, and found that it was accompanied by impaired mitochondrial bioenergetics (lower ATP levels) and elevated reactive oxygen species (ROS) production in both 2D and 3D models, ultimately resulting in DA neuron death in MOs. This was accompanied by elevated SNCA mRNA expression in both models, as well as higher α-synuclein protein amounts in neurons, as already found in post-mortem samples from PD patients (Shaltouki et al., 2018a). Lastly, our mouse model displayed significant neuronal loss in the SNpc as well as impaired motor learning, recapitulating two PD signs found in patients. Taken together, these results support the involvement of Miro1 in PD pathogenesis and highlight the potential of Miro1 variants to be used as novel, promising models of PD in vitro and in vivo.

    Guidages de l'improvisation

    This document concerns the design of an improvisation system that is both reactive to its external context and specified by a temporal macro-structure called a scenario. The design is embodied in a prototype. After a state of the art of computer improvisation, organized around a few key concepts, and a more detailed description of the two improvisation systems on which the prototype builds, ImproteK and SoMax, we describe how the system works and how the prototype was implemented.

    Représentations variationnelles de signaux musicaux et espaces génératifs

    Among the diverse research fields within computer music, the synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving with both musical practices and scientific and technical advances. Moreover, some synthesis processes also naturally handle the reverse process, called analysis, such that synthesis parameters can be partially or totally extracted from actual sounds, providing an alternative representation of the analyzed audio signals. In addition, the recent rise of machine learning algorithms has deeply questioned scientific research, bringing powerful data-centred methods that raised several epistemological questions among researchers, despite their practical efficiency. In particular, a family of machine learning methods called generative models focuses on the generation of original content using features extracted from an existing dataset. Such methods question not only previous approaches to generation, but also the way these methods can be integrated into existing creative processes. While these new generative frameworks are progressively being adopted in the domain of image generation, the application of such techniques to audio synthesis is still marginal.
    In this work, we aim to propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work fits into both practices and what can be expected from bringing them together. We then focus on generative models and on how modern advances in the domain can be exploited to learn complex sound distributions while remaining sufficiently flexible to be integrated into the creative flow of the user. We propose an inference/generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models based on a continuous higher-level space that we use to control the generation. We first provide preliminary results of our method applied to spectral information extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. We then study how to make these methods more suitable for learning audio data, successively tackling three different aspects. First, we propose two latent regularization strategies specifically designed for audio, based respectively on signal/symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, so that the obtained generative spaces also model the dynamics of the signal. In a last chapter, we shift from a scientific approach to a more research-and-creation-oriented point of view: first, we describe the architecture and design of our open-source library, vsacids, aimed at being used by expert and non-expert music makers as an integrated creation tool. Then, we propose a first musical use of our system through the creation of a real-time performance, called ægo, based jointly on our framework vsacids and on an explorative agent trained by reinforcement learning during the performance. Finally, we draw some conclusions on the different ways to improve and reinforce the proposed generation method, as well as on possible further creative applications.
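    As a compressed illustration of the prediction-based temporal regularization mentioned above, the sketch below adds a latent forecasting task to a frame-level auto-encoder, assuming PyTorch; the GRU predictor, the layer sizes, and the `beta_pred` weight are assumptions for the sketch, not the thesis' architecture.

```python
# Sketch: encourage a latent space to model signal dynamics by adding a prediction
# task: a small recurrent predictor must forecast the next frame's latent code.
# Encoder/decoder are placeholders; sizes and the loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, frame_dim = 16, 512
encoder = nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, frame_dim))
predictor = nn.GRU(input_size=z_dim, hidden_size=z_dim, batch_first=True)
params = list(encoder.parameters()) + list(decoder.parameters()) + list(predictor.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
beta_pred = 1.0  # weight of the predictive term (assumption)

def training_step(frames):                       # frames: (batch, time, frame_dim)
    z = encoder(frames)                          # per-frame latent codes
    recon = decoder(z)                           # frame reconstruction
    z_pred, _ = predictor(z[:, :-1])             # predict z[t+1] from z[..t]
    loss = F.mse_loss(recon, frames) + beta_pred * F.mse_loss(z_pred, z[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

loss = training_step(torch.randn(8, 20, frame_dim))   # dummy batch of 20-frame sequences
```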

    Machine Learning for Computer Music Multidisciplinary Research: A Practical Case Study

    This paper presents a multidisciplinary case study of practice with machine learning for computer music. It builds on the scientific study of two machine learning models respectively developed for data-driven sound synthesis and interactive exploration. It details how the learning capabilities of the two models were leveraged to design and implement a musical instrument focused on embodied musical interaction. It then describes how this instrument was employed in the composition and performance of aego, an improvisational piece with interactive sound and image for one performer. We discuss the outputs of our research and creation process, and build on them to share our personal insights and reflections on the multidisciplinary opportunities framed by machine learning for computer music.
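    The sketch below only illustrates the overall data flow of such an instrument, in which an interactive-exploration component steers the latent controls of a generative synthesis model from gesture input; both models are replaced by random placeholder mappings, and none of the paper's actual models or mappings are reproduced.

```python
# Schematic data flow: gesture -> exploration -> latent controls -> sound frame.
# The "exploration" and "synthesis" models are random placeholders, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, FRAME_SIZE, GESTURE_DIM = 8, 1024, 16
gesture_map = rng.standard_normal((GESTURE_DIM, LATENT_DIM))  # placeholder exploration model
frame_basis = rng.standard_normal((LATENT_DIM, FRAME_SIZE))   # placeholder synthesis decoder

def explore(gesture: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Nudge the current latent position toward a direction suggested by the gesture."""
    return 0.9 * position + 0.1 * np.tanh(gesture @ gesture_map)

def synthesize(latent: np.ndarray) -> np.ndarray:
    """Decode a latent position into one audio frame (placeholder decoder)."""
    return np.tanh(latent @ frame_basis)

position = np.zeros(LATENT_DIM)
for step in range(4):                                # stand-in for the real-time loop
    gesture = rng.standard_normal(GESTURE_DIM)       # e.g. features from a motion sensor
    position = explore(gesture, position)
    frame = synthesize(position)                     # would be sent to the audio output
    print(step, round(float(frame.std()), 3))
```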