
    Signatures statistiques de l'apprentissage dans le bulbe olfactif et nouvelle approche pour la segmentation d'images empilées

With the development of genetically encoded fluorescent calcium indicators, recording neurons with 2-photon microscopy in an awake, behaving animal has become an important neuroscience technique: it makes it possible to correlate behavior with neuronal activity, providing a better understanding of learning and memory circuits. The main focus of this thesis is to identify statistical signatures of learning in the neural activity of olfactory bulb granule cell dendrites, recorded with 2-photon calcium imaging while the animal performed an operant conditioning task. We employ self-supervised learning to project dendritic activity time series into a low-dimensional latent space. Supervised classification on these compressed representations shows that neural activity strongly predicts both the odors presented to the animal and the reward received. We then identify a robust signature of the animal's learning encoded in the neural signal. In the second part of the thesis, we aim to ease and accelerate the segmentation of dendritic structures in microscopy images. Virtual reality improves the visualization of, and interaction with, complex 3D data; it lets users provide image annotations easily, from which we apply cloud-based one-shot learning to segment microscopy and medical images with fast computation and robust accuracy.
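The abstract names a two-stage pipeline (self-supervised compression, then supervised decoding) without specifying the model. Below is a minimal sketch of that idea, with a small PyTorch autoencoder standing in for the unspecified self-supervised learner and synthetic traces in place of the real recordings; all names and data are illustrative.

```python
# Hypothetical sketch: compress activity time series with an autoencoder
# (a common self-supervised choice; the thesis does not name its exact
# model), then decode odor identity from the latent codes.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_timepoints, latent_dim = 600, 200, 8

# Placeholder data: one calcium trace per trial and a binary odor label.
traces = rng.normal(size=(n_trials, n_timepoints)).astype(np.float32)
odors = rng.integers(0, 2, size=n_trials)

class AutoEncoder(nn.Module):
    def __init__(self, n_in, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder(n_timepoints, latent_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.from_numpy(traces)

# Self-supervised stage: learn to reconstruct the traces (no labels used).
for _ in range(200):
    opt.zero_grad()
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    loss.backward()
    opt.step()

# Supervised stage: a linear classifier on the compressed representations.
with torch.no_grad():
    latents = model(x)[1].numpy()
z_train, z_test, y_train, y_test = train_test_split(latents, odors,
                                                    random_state=0)
clf = LogisticRegression(max_iter=1000).fit(z_train, y_train)
print("odor decoding accuracy:", clf.score(z_test, y_test))
```

The point of the split is that labels never touch the compression stage, so any decoding accuracy measured on the latents reflects structure the encoder captured on its own.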

    Towards Human in the Loop Analysis of Complex Point Clouds: Advanced Visualizations, Quantifications, and Communication Features in Virtual Reality

Multiple fields in biological and medical research produce large amounts of point cloud data with high dimensionality and complexity. A wide range of experiments generate point clouds, including segmented medical data and single-molecule localization microscopy, in which individual molecules are observed within their natural cellular environment. Analyzing this type of experimental data is a complex task with unique challenges, where providing extra physical dimensions for visualization and analysis could be beneficial. Furthermore, whether highly noisy data comes from single-molecule recordings or segmented medical data, the need to guide analysis with user intervention creates both an ergonomic challenge, to facilitate this interaction, and a computational challenge, to keep interactions fluid while information is being processed. Several applications, including our software DIVA for image stacks and our platform Genuage for point clouds, have leveraged Virtual Reality (VR) to visualize and interact with data in 3D. While the visualization aspects can be made compatible with different types of data, quantifications are far from being standard. In addition, complex analysis can require significant computational resources, making the real-time VR experience uncomfortable. Moreover, visualization software is mainly designed to represent a set of data points but lacks flexibility for manipulating and analyzing the data. This paper introduces new libraries that enhance the interaction and human-in-the-loop analysis of point cloud data in virtual reality, and integrates them into the open-source platform Genuage. We first detail a new toolbox of communication tools that enhances the user experience and improves flexibility. We then introduce a mapping toolbox that represents physical properties in space, overlaid on a 3D mesh, while maintaining a shader dedicated to the point cloud. We further introduce a programmable video capture tool, in VR and desktop modes, for intuitive data dissemination. Finally, we highlight the protocols that allow simultaneous analysis and fluid manipulation of data with a high refresh rate.
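To give a concrete sense of the mapping toolbox's role, the sketch below computes a per-point physical property and converts it into per-vertex colors of the kind a point cloud shader would consume. Genuage itself is a Unity application, so this Python version only illustrates the data side of such a mapping; the density estimate and all names are hypothetical, not Genuage's actual API.

```python
# Illustrative sketch: derive a physical property per point and map it
# to colors that a VR point cloud shader could render.
import numpy as np
from matplotlib import colormaps
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.normal(size=(10_000, 3))  # stand-in for SMLM localizations

# Per-point property: neighbor count within a fixed radius, a simple
# proxy for local molecular density.
tree = cKDTree(points)
density = np.array([len(tree.query_ball_point(p, r=0.2)) for p in points])

# Normalize to [0, 1] and map through a colormap; a renderer would
# upload `colors` as a per-vertex attribute consumed by the shader.
norm = (density - density.min()) / np.ptp(density)
colors = colormaps["viridis"](norm)  # (N, 4) RGBA array
print(points.shape, colors.shape)
```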

    New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing

Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze, and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets address this visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing visualization and interaction in VR remains an active topic, one of the most pressing issues is how to use VR for the annotation and analysis of data. Annotating data is often a required step for training machine learning algorithms. In biological research, for example, improving the ability to annotate complex three-dimensional data matters because newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact with, annotate, and analyze data by combining VR with cloud computing. VR provides natural interaction with volumetric representations of experimental imaging data, while cloud computing performs costly computations to accelerate data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescent microscopy images of mouse neurons and on tumor and organ annotations in medical images.
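The procedure couples sparse VR annotations with cloud-side computation. Below is a hedged sketch of that annotate-then-propagate loop, with a random forest on smoothed-intensity features standing in for the unspecified cloud model; the features, names, and data are illustrative, not the paper's actual pipeline.

```python
# Hypothetical sketch: a handful of user-labeled voxels (the kind a VR
# scribble would produce) train a lightweight classifier that propagates
# the labels to every voxel of the stack.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
stack = rng.random((32, 64, 64)).astype(np.float32)  # stand-in volume
stack[10:20, 20:40, 20:40] += 1.0                    # bright "structure"

# Per-voxel features: raw intensity plus two Gaussian-smoothed scales.
features = np.stack([stack,
                     gaussian_filter(stack, 1.0),
                     gaussian_filter(stack, 3.0)], axis=-1).reshape(-1, 3)

# Sparse annotations: a few foreground and background voxel indices,
# standing in for strokes drawn in the headset.
fg = np.ravel_multi_index(([15, 15], [30, 25], [30, 25]), stack.shape)
bg = np.ravel_multi_index(([2, 28], [5, 60], [5, 60]), stack.shape)
X = features[np.concatenate([fg, bg])]
y = np.array([1, 1, 0, 0])

# Cloud-side step: fit, then predict a full-volume mask to send back.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
mask = clf.predict(features).reshape(stack.shape)
print("segmented voxels:", int(mask.sum()))
```

The division of labor is the design point: the headset only collects a few labeled voxels and displays the returned mask, while the feature computation and model fitting run remotely, keeping the VR frame rate unaffected.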