
    Persistence of neuronal representations through time and damage in the hippocampus

    How do neurons encode long-term memories? Bilateral imaging of neuronal activity in the mouse hippocampus reveals that, from one day to the next, ~40% of neurons change their responsiveness to cues, but thereafter only 1% of cells change per day. Despite these changes, neuronal responses are resilient to a lack of exposure to a previously completed task or to hippocampal lesions. Unlike individual neurons, whose responses change after a few days, groups of neurons with inter- and intrahemispheric synchronous activity show stable responses for several weeks. The likelihood that a neuron maintains its responsiveness across days is proportional to the number of neurons with which its activity is synchronous. Information stored in individual neurons is thus relatively labile, but it can be reliably stored in networks of synchronously active neurons.
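
    The reported relationship (persistence proportional to the number of synchronous partners) can be illustrated with a minimal sketch; the correlation-based definition of synchrony, the threshold and the function names below are assumptions for illustration, not the authors' analysis pipeline.

        import numpy as np

        def synchronous_partner_counts(activity, corr_threshold=0.3):
            """activity: (n_neurons, n_timepoints) traces; count each neuron's synchronous partners."""
            corr = np.corrcoef(activity)          # pairwise activity correlation
            np.fill_diagonal(corr, 0.0)           # ignore self-correlation
            return (corr > corr_threshold).sum(axis=1)

        def persistence_by_synchrony(activity_day1, responsive_day1, responsive_day2):
            """Fraction of day-1-responsive neurons still responsive on day 2, grouped by partner count."""
            partners = synchronous_partner_counts(activity_day1)
            kept = responsive_day1 & responsive_day2
            return [(k, kept[responsive_day1 & (partners == k)].mean())
                    for k in np.unique(partners[responsive_day1])]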

    Gesture tracking and neural activity segmentation in head-fixed behaving mice by deep learning methods

    The typical approach used by neuroscientists is to study the response of laboratory animals to a stimulus while simultaneously recording their neural activity. With the advent of calcium imaging technology, researchers can now study neural activity at sub-cellular resolution in vivo. Similarly, recording the behaviour of laboratory animals is becoming more affordable. Although it is now easier to record behavioural and neural data, these data come with their own set of challenges. The biggest challenge, given the sheer volume of the data, is annotation. The traditional approach is to annotate the data manually, frame by frame: for behavioural data, this means inspecting each frame and tracing the animals; for neural data, annotation is carried out by a trained neuroscientist. In this research, we propose automated tools based on deep learning that can aid in the processing of behavioural and neural data. These tools will help neuroscientists annotate and analyse the data they acquire in an automated and reliable way.
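
    As an illustration of the kind of automation proposed here, the sketch below shows a small per-frame behaviour classifier; the architecture, layer sizes and label count are hypothetical placeholders rather than the models developed in the thesis.

        import torch
        import torch.nn as nn

        class FrameClassifier(nn.Module):
            """Assigns one of n_behaviours labels to each grayscale video frame."""
            def __init__(self, n_behaviours=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, n_behaviours)

            def forward(self, frames):                  # frames: (batch, 1, H, W)
                return self.head(self.features(frames).flatten(1))

        # Usage: labels = FrameClassifier()(batch_of_frames).argmax(dim=1)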

    EZcalcium: Open-Source Toolbox for Analysis of Calcium Imaging Data

    Fluorescence calcium imaging using a range of microscopy approaches, such as two-photon excitation or head-mounted “miniscopes,” is one of the preferred methods to record neuronal activity and glial signals in various experimental settings, including acute brain slices, brain organoids, and behaving animals. Because changes in the fluorescence intensity of genetically encoded or chemical calcium indicators correlate with action potential firing in neurons, data analysis is based on inferring such spiking from changes in pixel intensity values across time within different regions of interest. However, the algorithms necessary to extract biologically relevant information from these fluorescent signals are complex and require significant expertise in programming to develop robust analysis pipelines. For decades, the only way to perform these analyses was for individual laboratories to write their own custom code. These routines were typically not well annotated and lacked intuitive graphical user interfaces (GUIs), which made it difficult for scientists in other laboratories to adopt them. Although the panorama is changing with recent tools like CaImAn, Suite2P, and others, there is still a barrier for many laboratories to adopt these packages, especially for potential users without sophisticated programming skills. As two-photon microscopes are becoming increasingly affordable, the bottleneck is no longer the hardware, but the software used to analyze the calcium data optimally and consistently across different groups. We addressed this unmet need by incorporating recent software solutions, namely NoRMCorre and CaImAn, for motion correction, segmentation, signal extraction, and deconvolution of calcium imaging data into an open-source, easy-to-use, intuitive, GUI-based, and automated data-analysis software package, which we named EZcalcium.
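
    For orientation, the signal-extraction step reduces to computing a relative fluorescence change per region of interest; the sketch below shows a generic ΔF/F calculation with an assumed rolling-percentile baseline, not EZcalcium's own implementation.

        import numpy as np

        def roi_trace(movie, roi_mask):
            """movie: (T, H, W) array; roi_mask: boolean (H, W) mask of one ROI."""
            return movie[:, roi_mask].mean(axis=1)      # mean ROI intensity per frame

        def delta_f_over_f(trace, window=300, percentile=10):
            """Baseline F0 = rolling low percentile of the trace; returns (F - F0) / F0."""
            f0 = np.array([np.percentile(trace[max(0, t - window):t + 1], percentile)
                           for t in range(len(trace))])
            return (trace - f0) / f0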

    An unsupervised generative strategy for detection and characterization of rare behavioural events in mice in the open field to assess the effect of optogenetic activation of serotonergic neurons in the dorsal raphe nuclei

    The purpose of our work is to provide an unsupervised deep learning tool that uses the predictability of behavior as a meaningful metric to quantify the differences between normal and abnormal behavior in the context of an experiment in which mice receive optogenetic stimulation of their serotonergic neurons located in the dorsal raphe nuclei. We use generative adversarial networks to learn, on a training subset of the videos, a baseline behavioral repertoire by predicting future frames from the frames that precede them. By defining a predictability index as the dissimilarity between the generated prediction and the ground-truth frame, we can determine in which frames a behavior not observed by the model during training is performed and therefore detect the presence of stimulation by analysing only the fluctuations of this index, which indicate when the mouse is performing behaviors that are not present in the learnt baseline.
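
    A hedged sketch of the predictability index described above: the dissimilarity is assumed here to be a per-frame mean squared error, and predict_next_frame is a placeholder for the trained generator rather than the actual network.

        import numpy as np

        def predictability_index(frames, predict_next_frame, context=4):
            """frames: (T, H, W) array in [0, 1]; one index value per predicted frame."""
            index = []
            for t in range(context, len(frames)):
                predicted = predict_next_frame(frames[t - context:t])   # generator output
                index.append(np.mean((predicted - frames[t]) ** 2))     # dissimilarity to ground truth
            return np.array(index)

        def flag_unpredictable_frames(index, z_threshold=3.0):
            """Frames whose index deviates strongly from the learnt baseline repertoire."""
            z = (index - index.mean()) / index.std()
            return np.where(z > z_threshold)[0]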

    Structural and molecular interrogation of intact biological systems

    Obtaining high-resolution information from a complex system, while maintaining the global perspective needed to understand system function, represents a key challenge in biology. Here we address this challenge with a method (termed CLARITY) for the transformation of intact tissue into a nanoporous hydrogel-hybridized form (crosslinked to a three-dimensional network of hydrophilic polymers) that is fully assembled but optically transparent and macromolecule-permeable. Using mouse brains, we show intact-tissue imaging of long-range projections, local circuit wiring, cellular relationships, subcellular structures, protein complexes, nucleic acids and neurotransmitters. CLARITY also enables intact-tissue in situ hybridization, immunohistochemistry with multiple rounds of staining and de-staining in non-sectioned tissue, and antibody labelling throughout the intact adult mouse brain. Finally, we show that CLARITY enables fine structural analysis of clinical samples, including non-sectioned human tissue from a neuropsychiatric-disease setting, establishing a path for the transmutation of human tissue into a stable, intact and accessible form suitable for probing the structural and molecular underpinnings of physiological function and disease.

    Visualizing classification of natural video sequences using sparse, hierarchical models of cortex.

    Recent work on hierarchical models of visual cortex has reported state-of-the-art accuracy on whole-scene labeling using natural still imagery. This raises the question of whether the reported accuracy may be due to the sophisticated, non-biological back-end supervised classifiers typically used (support vector machines) and/or the limited number of images used in these experiments. In particular, is the model classifying features from the object or the background? Previous work (Landecker, Brumby, et al., COSYNE 2010) proposed tracing the spatial support of a classifier’s decision back through a hierarchical cortical model to determine which parts of the image contributed to the classification, compared to the positions of objects in the scene. In this way, we can go beyond standard measures of accuracy to provide tools for visualizing and analyzing high-level object classification. We now describe new work exploring the extension of these ideas to detection of objects in video sequences of natural scenes.
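
    The idea of tracing which image regions support a classification can be approximated with a simple occlusion-sensitivity map; the sketch below is a generic stand-in for that idea and does not reproduce the hierarchical trace-back method of Landecker, Brumby, et al.

        import numpy as np

        def spatial_support_map(image, classify, target_class, patch=16):
            """classify: callable returning class scores for an (H, W) image."""
            base = classify(image)[target_class]
            h, w = image.shape
            support = np.zeros_like(image, dtype=float)
            for y in range(0, h, patch):
                for x in range(0, w, patch):
                    occluded = image.copy()
                    occluded[y:y + patch, x:x + patch] = image.mean()   # mask one patch
                    support[y:y + patch, x:x + patch] = base - classify(occluded)[target_class]
            return support   # large values mark regions the decision depends on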