125 research outputs found

    Lipids or Proteins: Who Is Leading the Dance at Membrane Contact Sites?

    Understanding the mode of action of membrane contact sites (MCSs) across eukaryotic organisms at the near-atomic level, in order to infer function at the cellular and tissue levels, is a challenge scientists currently face. These peculiar systems dedicated to inter-organellar communication are prime examples of cellular processes in which the interplay between lipids and proteins is critical. In this mini review, we underline the link between the membrane lipid environment, the recruitment of proteins to specialized membrane domains, and the function of MCSs. More precisely, we offer insights into the crucial role of lipids in defining the specificity of plant endoplasmic reticulum (ER)-plasma membrane (PM) MCSs, and we propose approaches to study them at multiple scales. Our goal is not so much a detailed description of MCSs, as there are numerous focused reviews on the subject, but rather to pinpoint the critical elements defining those structures and to give an original point of view by considering the subject from a near-atomic angle with a focus on lipids. We review current knowledge of how lipids can define MCS territories and play a role in the recruitment and function of MCS-associated proteins, and, in turn, how the lipid environment can be modified by proteins.

    Mapping Through Listening

    Gesture-to-sound mapping is generally defined as the association between gestural and sound parameters. This article describes an approach that brings forward the perception-action loop as a fundamental design principle for gesture-sound mapping in digital musical instruments. Our approach considers the process of listening as the foundation, and the first step, in the design of action-sound relationships. In this design process, the relationship between action and sound is derived from actions that can be perceived in the sound. Building on previous work on listening modes and gestural descriptions, we propose to distinguish between three mapping strategies: instantaneous, temporal, and metaphoric. Our approach makes use of machine learning techniques for building prototypes, from digital musical instruments to interactive installations. Four examples of scenarios and prototypes are described and discussed.
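To make the "instantaneous" strategy concrete, here is a minimal illustrative sketch (not the authors' implementation): each gesture frame is mapped directly to sound parameters by a regression learned from a few hypothetical example pairs chosen while listening. All feature names and values below are assumptions for illustration.

```python
import numpy as np

# Hypothetical training examples: gesture features (x, y position, speed)
# paired with sound parameters (pitch in Hz, loudness) chosen while listening.
gestures = np.array([[0.0, 0.0, 0.1],
                     [0.5, 0.2, 0.4],
                     [1.0, 0.8, 0.9],
                     [0.2, 1.0, 0.6]])
sounds = np.array([[220.0, 0.1],
                   [330.0, 0.4],
                   [440.0, 0.9],
                   [260.0, 0.5]])

# Least-squares fit with a bias term: sound ~ [gesture, 1] @ W
X = np.hstack([gestures, np.ones((len(gestures), 1))])
W, *_ = np.linalg.lstsq(X, sounds, rcond=None)

def map_gesture(frame):
    """Map one gesture frame to sound parameters, frame by frame."""
    return np.append(frame, 1.0) @ W

params = map_gesture(np.array([0.5, 0.2, 0.4]))
```

An instantaneous mapping like this is memoryless: each output depends only on the current frame, in contrast to the temporal and metaphoric strategies, which take the evolution of the gesture into account.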

    Integrative and comparative genomic analyses identify clinically relevant pulmonary carcinoid groups and unveil the supra-carcinoids

    The worldwide incidence of pulmonary carcinoids is increasing, but little is known about their molecular characteristics. Through machine learning and multi-omics factor analysis, we compare and contrast the genomic profiles of 116 pulmonary carcinoids (including 35 atypical), 75 large-cell neuroendocrine carcinomas (LCNEC), and 66 small-cell lung cancers. Here we report that integrative analyses of 257 lung neuroendocrine neoplasms stratify atypical carcinoids into two prognostic groups with 10-year overall survival of 88% and 27%, respectively. We identify therapeutically relevant molecular groups of pulmonary carcinoids, suggesting DLL3 and the immune system as candidate therapeutic targets; we confirm the value of OTP expression levels for the prognosis and diagnosis of these diseases; and we unveil the group of supra-carcinoids. This group comprises samples with carcinoid-like morphology yet the molecular and clinical features of the deadly LCNEC, further supporting the previously proposed molecular link between low- and high-grade lung neuroendocrine neoplasms.
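As a generic illustration of how such integrative analyses can stratify samples into molecular groups, the sketch below runs a plain 2-means clustering on synthetic latent-factor scores. The data and the two-dimensional factor space are entirely made up and unrelated to the study's cohort; this only shows the stratification mechanic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic groups of samples in a 2-D latent (factor) space.
group_a = rng.normal([0.0, 0.0], 0.3, size=(10, 2))
group_b = rng.normal([3.0, 3.0], 0.3, size=(10, 2))
samples = np.vstack([group_a, group_b])

# Plain 2-means: alternate assignment and centroid updates.
centers = samples[[0, -1]].copy()
for _ in range(10):
    d = np.linalg.norm(samples[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([samples[labels == k].mean(axis=0) for k in range(2)])
```

In the actual study, the grouping is derived from multi-omics factors rather than raw coordinates, and the resulting groups are then compared on clinical endpoints such as 10-year overall survival.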

    Learning Motion-Sound Relationships by Demonstration (Apprentissage des Relations entre Mouvement et Son par DĂ©monstration)

    Designing the relationship between motion and sound is essential to the creation of interactive sonic and musical systems. This thesis proposes an approach to the design of the mapping between motion and sound called Mapping-by-Demonstration, a framework for crafting sonic interactions from demonstrations of embodied associations between motion and sound, performed while listening. It draws upon existing literature emphasizing the importance of bodily experience in sound perception and cognition, and uses an interactive machine learning approach to build the mapping iteratively from user demonstrations. Drawing upon related work in animation, speech processing, and robotics, we propose to fully exploit the generative nature of probabilistic models, from continuous gesture recognition to continuous sound parameter generation. We studied several probabilistic models in the light of continuous interaction, examining both instantaneous (Gaussian Mixture Models) and temporal (Hidden Markov Models) models for recognition, regression, and sound parameter generation. We adopted an interactive machine learning perspective, with a focus on learning sequence models from few examples and on performing recognition and mapping continuously in real time. The models either represent movement alone or integrate a joint representation of motion and sound. In movement models, the system learns the association between the input movement and an output modality such as gesture labels or movement characteristics. In motion-sound models, motion and sound are modeled jointly, and the learned mapping directly generates sound parameter trajectories from input movements.
We explored a set of applications and experiments relating to real-world problems in movement practice, sonic interaction design, and music. We proposed two approaches to movement analysis, based on Hidden Markov Models and Hidden Markov Regression respectively, and showed, through a use case in Tai Chi performance, how the models help characterize movement sequences across trials and performers. We presented two generic systems for movement sonification.
The first system allows users to craft hand-gesture control strategies for the exploration of sound textures, based on Gaussian Mixture Regression. The second system exploits the temporal modeling of Hidden Markov Regression to associate vocalizations with continuous gestures. Both systems gave rise to interactive installations presented to a wide public, and we have begun investigating their potential to support gesture learning.
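Gaussian Mixture Regression, the technique behind the first sonification system, conditions a Gaussian mixture over joint motion-sound vectors on the observed motion to generate sound parameters. Below is a minimal sketch with two hand-specified one-dimensional components; the component parameters are illustrative assumptions, not values learned from real data.

```python
import numpy as np

# Two joint Gaussians over (motion m, sound s): weights, means, covariances.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 100.0],   # component 1: slow motion, low pitch
                  [1.0, 400.0]])  # component 2: fast motion, high pitch
covs = np.array([[[0.1, 1.0], [1.0, 50.0]],
                 [[0.1, 2.0], [2.0, 80.0]]])

def gmr(m):
    """Conditional expectation E[s | m] under the joint mixture."""
    resp = np.empty(len(weights))
    cond = np.empty(len(weights))
    for k in range(len(weights)):
        mu_m, mu_s = means[k]
        s_mm, s_ms = covs[k][0]
        # Responsibility of component k for the observed motion value.
        resp[k] = weights[k] * np.exp(-0.5 * (m - mu_m) ** 2 / s_mm) / np.sqrt(s_mm)
        # Conditional mean of sound given motion for component k.
        cond[k] = mu_s + s_ms / s_mm * (m - mu_m)
    resp /= resp.sum()
    return float(resp @ cond)
```

Because the output blends the component-wise conditional means by their responsibilities, the mapping interpolates smoothly between regions demonstrated by the user; Hidden Markov Regression extends this idea with temporal states.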

    Provincial Public Expenditure in China: A Tale of Profligacy

    This paper examines the cyclicality of provincial expenditure in China during the period 1978-2013. We assess whether provincial expenditure has been procyclical, using panel data for our analysis. Profligacy is found to be a regular feature of provincial fiscal policy. It occurs in both good and bad times and has markedly increased since 1994 with the increased autonomy of provinces. We further find that the profligacy bias is mitigated when financial constraints are relaxed, the remaining political life of the governor is long, government efficiency is strong, corruption incidence is low, and governments are large.
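Cyclicality of this kind is commonly measured by regressing real expenditure growth on real output growth, with a positive slope indicating procyclical (profligate-in-good-times) spending. The sketch below uses made-up numbers, not the paper's data, purely to show the mechanic.

```python
import numpy as np

gdp_growth = np.array([0.08, 0.10, 0.06, 0.12, 0.04, 0.09])  # hypothetical
exp_growth = np.array([0.10, 0.14, 0.07, 0.17, 0.03, 0.12])  # hypothetical

# OLS slope of expenditure growth on GDP growth (with an intercept).
X = np.vstack([np.ones_like(gdp_growth), gdp_growth]).T
beta = np.linalg.lstsq(X, exp_growth, rcond=None)[0]
slope = beta[1]  # slope > 0 indicates procyclical spending
```

In the panel setting of the paper, this regression is run across provinces and years, with the slope allowed to vary with factors such as financial constraints or governance quality.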

    A Machine learning approach to violin bow technique classification: a comparison between IMU and MOCAP systems

    Paper presented at the 5th International Workshop on Sensor-based Activity Recognition and Interaction, held on 20-21 September 2018 in Berlin, Germany. Motion capture (MOCAP) systems have been used to analyze body motion and postures in biomedicine, sports, rehabilitation, and music. With the aim of comparing the precision of low-cost motion-tracking devices (e.g. the Myo armband) with that of MOCAP systems in the context of music performance, we recorded MOCAP and Myo data of a top professional violinist executing four fundamental bowing techniques (Détaché, Martelé, Spiccato, and Ricochet). Using the recorded data, we applied machine learning techniques to train models to classify the four bowing techniques. Despite intrinsic differences between the MOCAP and low-cost data, the Myo-based classifier achieved slightly higher accuracy than the MOCAP-based classifier. This result shows that it is possible to develop music-gesture learning applications based on low-cost technology that can be used in home environments by self-learning practitioners. This work has been partly sponsored by the Spanish TIN project TIMUL (TIN 2013-48152-C2-2-R), the European Union Horizon 2020 research and innovation programme under grant agreement No. 688269 (TELMI project), and the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
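A hedged sketch of the classification step, on synthetic features rather than the study's recordings: here a nearest-centroid classifier assigns a bowing technique from per-stroke IMU features, as a minimal stand-in for the machine learning models used in the paper. The feature choices and values are assumptions for illustration.

```python
import numpy as np

# Hypothetical per-stroke features: [mean acceleration, stroke duration].
train = {
    "detache": np.array([[1.0, 0.50], [1.1, 0.55]]),
    "martele": np.array([[3.0, 0.30], [3.2, 0.28]]),
    "spiccato": np.array([[2.0, 0.12], [2.1, 0.10]]),
    "ricochet": np.array([[2.6, 0.05], [2.4, 0.06]]),
}
centroids = {name: feats.mean(axis=0) for name, feats in train.items()}

def classify(features):
    """Return the technique whose centroid is closest to the feature vector."""
    return min(centroids, key=lambda n: np.linalg.norm(features - centroids[n]))

label = classify(np.array([3.1, 0.29]))
```

With real IMU streams, the features would be extracted over segmented strokes, and the comparison in the paper hinges on whether the lower-precision Myo features remain as separable as the MOCAP ones.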

    Marcelle: A Toolkit for Designing Human-Machine-Learning Interactions (Marcelle : un toolkit pour la conception d’interactions humain-apprentissage automatique)

    Marcelle is an open-source toolkit dedicated to the design of web applications involving human interactions with machine learning algorithms. This demonstration illustrates how Marcelle can support research and education in the field of interactive machine learning, notably in scenarios involving multiple stakeholders.