
    Deep learning-assisted high-throughput analysis of freeze-fracture replica images applied to glutamate receptors and calcium channels at hippocampal synapses

    The molecular anatomy of synapses defines their characteristics in transmission and plasticity. Precise measurements of the number and distribution of synaptic proteins are important for our understanding of synapse heterogeneity within and between brain regions. Freeze-fracture replica immunogold electron microscopy enables quantitative analysis of these proteins on a two-dimensional membrane surface. Here, we introduce the Darea software, which utilizes deep learning for the analysis of replica images, and demonstrate its usefulness for quick, reproducible measurements of pre- and postsynaptic areas and of the density and distribution of gold particles at synapses. We used Darea to compare glutamate receptor and calcium channel distributions between hippocampal CA3-CA1 spine synapses on apical and basal dendrites, which differ in the signaling pathways involved in synaptic plasticity. We found that apical synapses express a higher density of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors and a stronger increase of AMPA receptors with synaptic size, while basal synapses show a larger increase in N-methyl-D-aspartate (NMDA) receptors with size. Interestingly, AMPA and NMDA receptors are segregated within postsynaptic sites and negatively correlated in density among both apical and basal synapses. In the presynaptic sites, Cav2.1 voltage-gated calcium channels show similar densities in apical and basal synapses, with distributions consistent with an exclusion zone model of calcium channel-release site topography.
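    As a rough, hypothetical illustration of the kind of per-synapse measurement described above (particle density and nearest-neighbour spacing within a demarcated area), the sketch below uses NumPy and SciPy; the function name and inputs are assumptions for illustration and are not part of Darea.

```python
# Hypothetical sketch: density and nearest-neighbour statistics for immunogold
# particles inside a demarcated synaptic area. Inputs and names are assumed.
import numpy as np
from scipy.spatial import cKDTree

def particle_stats(particle_xy: np.ndarray, area_um2: float) -> dict:
    """particle_xy: (N, 2) particle coordinates in micrometres;
    area_um2: demarcated synaptic area in square micrometres."""
    n = len(particle_xy)
    density = n / area_um2                   # particles per square micrometre
    if n > 1:
        tree = cKDTree(particle_xy)
        d, _ = tree.query(particle_xy, k=2)  # k=2: the first hit is the point itself
        nnd = d[:, 1].mean()                 # mean nearest-neighbour distance
    else:
        nnd = np.nan
    return {"n_particles": n, "density_per_um2": density, "mean_nnd_um": nnd}

# Example: 12 particles scattered over a 0.05 um^2 postsynaptic density
rng = np.random.default_rng(0)
print(particle_stats(rng.uniform(0, 0.22, size=(12, 2)), area_um2=0.05))
```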

    Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

    Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals in artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In this paper, the process of building the WB-PGM and learning from the human brain to build cognitive architectures is described.

    Supporting Scientific Research Through Machine and Deep Learning: Fluorescence Microscopy and Operational Intelligence Use Cases

    Although the debate over what data science is has a long history and has not yet reached a complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a machine learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability to understand the message content and to provide meaningful groupings, in line with incidents previously reported by human operators.
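    For the second project, the described pipeline (word2vec embeddings of transfer-failure messages followed by K-means clustering) could look roughly like the sketch below; the example messages and parameter choices are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of the described pipeline: embed error messages with word2vec,
# average the word vectors per message, and group messages with K-means.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

messages = [                                   # invented examples, not real ATLAS logs
    "checksum mismatch at destination storage element",
    "destination checksum verification failed",
    "connection timed out while contacting source",
    "source endpoint connection timeout",
]
tokens = [m.split() for m in messages]

w2v = Word2Vec(sentences=tokens, vector_size=32, window=5, min_count=1, seed=0)
msg_vecs = np.array([np.mean([w2v.wv[t] for t in toks], axis=0) for toks in tokens])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(msg_vecs)
for message, cluster in zip(messages, labels):
    print(cluster, message)                    # similar messages should share a cluster
```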

    Organization of spatiotemporal information and relational memory in the hippocampus

    This work examines the role of the hippocampus and relational memory in organizing episodic memory during navigation and reconstruction. Navigation is a critical component of most organisms' survival. Reconstruction, on the other hand, provides an incredibly rich method of evaluating the precise information remembered by an individual after attempting to learn and remember that information. By validating the computational framework in this work on amnesic patients with hippocampal damage, an understanding of some of the specific types of relations that rely on the hippocampus can be established. This framework can then be applied to a much more complex spatiotemporal navigation and reconstruction task in healthy individuals to gain a wider perspective on the organization of episodic memory, which is known to rely critically on the hippocampus. The first experiment and associated analysis framework presented in this document (Chapter 2) use spatial reconstruction to establish that not all types of spatial relations are impaired in patients with hippocampal damage. In particular, arbitrary identity-location relations (i.e., those relationships where the element being bound could just as easily have been anything) are critically impaired in these patients, while location information, disregarding identity, is not. The use of reconstruction in this context allows a set of critical computational metrics relating to hippocampal function to be established, which can then be applied to other reconstruction tasks in healthy individuals to learn more about the wider structure and organization of memory. In the second experiment (Chapters 3 and 4), the methodologies applied to patients in the first experiment are applied to a novel Spatiotemporal Navigation Task in healthy young adults. In this task, participants are not simply asked to study and reconstruct items in space; instead, in Virtual Reality, they navigate space and time (via normal movement and simulated Time Travel) and study, then reconstruct, the locations of events in spacetime. The computational framework established in the previous chapter is then applied to show that relational memory errors in time are far more common in this task than in space, suggesting differences in representations between these two domains even when the navigation and exploration of the domains are put on a more equal footing. Additionally, in time, these relational memory errors are far more likely to occur within a shared contextual region than would be expected by chance. In fact, this error (temporal relational memory errors within a context) worsens across the first three trials, suggesting a systematic bias due to context. Finally, a more traditional bias, the context boundary effect (i.e., a "squishing" of within-context temporal locations and a "stretching" of across-context temporal locations), is observed even though participants are allowed to re-explore the contexts arbitrarily, multiple times. This suggests that the context boundaries have a profound impact on both the distance judgements and the relational memory structure associated with events in spacetime. In the fourth chapter, the navigation component of the Spatiotemporal Navigation Task is examined to determine whether changes in study-time navigation and exploration relate to changes in the various test metrics discussed in the previous chapter.
    More rapid improvements in spatial and temporal navigation are shown to relate to more rapid improvements in memory in those domains, separably, suggesting that spatial and temporal representations may in some way be separable in this task, in both the relational representations and the navigation strategies supporting those representations. Relational memory improvements are shown to be uniquely tied to changes in navigation complexity and systematicity, pointing to an interplay between in-the-moment, memory-guided decision making and subsequent relational memory efficacy. Context boundaries are suggested to act more as a discriminatory feature (at least in this task) than as one used to strengthen within-context relational memory organization accuracy, as there is a significant relationship between changes in context-boundary crossing and both the context boundary effect and across-context temporal relational memory errors. Finally, a preference for exploring an otherwise temporally flexible environment in the implied forward order, with increasing contiguity, is suggested to be a critical element in improving temporal, relational, and contextual memory organization. Taken together, this work shows the richness of spatiotemporal navigation and reconstruction for observing the complex interplay between navigation in space, navigation in time, and how these may ultimately relate to navigation in memory. By embracing principled approaches to the analysis of behavioral data and by including complex behavioral mechanics (such as simulated time travel), this work extends our understanding of the role of hippocampal relational memory and overall memory organization.
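    One plausible way to compute the kind of reconstruction metrics discussed above (per-item misplacement and identity-location "swap" errors) is sketched below; this is an assumed simplification for illustration, not the dissertation's actual framework.

```python
# Hedged sketch: per-item placement error, plus a simple swap-error count
# (an item placed nearer to another item's studied location than to its own).
import numpy as np

def reconstruction_metrics(studied: np.ndarray, placed: np.ndarray):
    """studied, placed: (N, 2) arrays; row i holds item i's studied / placed position."""
    dists = np.linalg.norm(placed[:, None, :] - studied[None, :, :], axis=-1)
    misplacement = np.diag(dists)                       # error to the item's own location
    swapped = dists.argmin(axis=1) != np.arange(len(studied))
    return misplacement.mean(), int(swapped.sum())

studied = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
placed  = np.array([[0.9, 0.1], [0.1, 0.1], [0.1, 0.8]])  # first two items swapped
print(reconstruction_metrics(studied, placed))            # -> (mean error, 2 swap errors)
```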

    Toward Understanding Visual Perception in Machines with Human Psychophysics

    Over the last several years, Deep Learning algorithms have become more and more powerful. As such, they are being deployed in increasingly many areas, including ones that can directly affect human lives. At the same time, regulations like the GDPR or the AI Act are putting the demand to better understand these artificial algorithms on legal grounds. How do these algorithms come to their decisions? What limits do they have? And what assumptions do they make? This thesis presents three publications that deepen our understanding of deep convolutional neural networks (DNNs) for visual perception of static images. While all of them leverage human psychophysics, they do so in two different ways: either via direct comparison between human and DNN behavioral data, or via an evaluation of the helpfulness of an explainability method. Besides insights on DNNs, these works emphasize good practices: for comparison studies, we propose a checklist on how to design, conduct and interpret experiments between different systems; for explainability methods, our evaluations exemplify that quantitatively testing widespread intuitions can help put their benefits in a realistic perspective. In the first publication, we test how similar DNNs are to the human visual system, and more specifically to its capabilities and information processing. Our experiments reveal that DNNs (1) can detect closed contours, (2) perform well on an abstract visual reasoning task, and (3) correctly classify small image crops. On a methodological level, these experiments illustrate that (1) human bias can influence our interpretation of findings, (2) distinguishing necessary and sufficient mechanisms can be challenging, and (3) the degree of aligning experimental conditions between systems can alter the outcome. In the second and third publications, we evaluate how helpful humans find the explainability method feature visualization. The purpose of this tool is to grant insights into the features of a DNN. To measure the general informativeness and causal understanding supported via feature visualizations, we test participants on two different psychophysical tasks. Our data unveil that humans can indeed understand the inner DNN semantics based on this explainability tool. However, other visualizations, such as natural dataset samples, also provide useful, and sometimes even more useful, information. On a methodological level, our work illustrates that human evaluations can adjust our expectations toward explainability methods and that different claims have to match the experiment.
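    A minimal sketch of one experiment type mentioned above, probing a pretrained DNN with progressively smaller centre crops, might look as follows; the model choice, image path and crop sizes are placeholders rather than the publications' actual setup.

```python
# Hedged sketch: record a pretrained DNN's top-1 prediction as the input is
# reduced to smaller and smaller centre crops (a stand-in for the crop study).
import torch
from torchvision import models, transforms
from torchvision.transforms.functional import center_crop
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
prep = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # placeholder path: any test image
for crop_px in (224, 112, 56, 28):
    cropped = center_crop(img, [crop_px, crop_px])
    with torch.no_grad():
        logits = model(prep(cropped).unsqueeze(0))
    print(crop_px, int(logits.argmax()))          # predicted ImageNet class per crop size
```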

    Computational Unfolding of the Human Hippocampus

    The hippocampal subfields are defined by their unique cytoarchitectures, which many recent studies have tried to map to human in-vivo MRI because of their promise to further our understanding of hippocampal function, or its dysfunction in disease. However, recent anatomical literature has highlighted broad inter-individual variability in hippocampal morphology and subfield locations, much of which can be attributed to different folding configurations within hippocampal (or archicortical) tissue. Inspired in part by analogous surface-based neocortical analysis methods, the current thesis aimed to develop a standardized coordinate framework, or surface-based method, that respects the topology of all hippocampal folding configurations. I developed such a coordinate framework in Chapter 2, which was initialized by detailed manual segmentations of hippocampal grey matter and high-myelin laminae which are visible in 7-Tesla MRI and which separate different hippocampal folds. This framework was leveraged to i) computationally unfold the hippocampus, which provided implicit topological inter-individual alignment, ii) delineate subfields with high reliability and validity, and iii) extract novel structural features of hippocampal grey matter. In Chapter 3, I applied this coordinate framework to the open-source BigBrain 3D histology dataset. With this framework, I computationally extracted morphological and laminar features and showed that they are sufficient to derive hippocampal subfields in a data-driven manner. This underscores the sensitivity of these computational measures and the validity of the applied subfield definitions. Finally, the unfolding coordinate framework developed in Chapter 2 and extended in Chapter 3 requires manual detection of the different tissue classes that separate folds in hippocampal grey matter. This is costly in the time and the expertise required. Thus, in Chapter 4, I applied state-of-the-art deep learning methods to the open-source Human Connectome Project MRI dataset to automate this process. This allows for scalable application of the methods described in Chapters 2, 3, and 4 to similar new datasets, with support for extensions to suit data of different modalities or resolutions. Overall, the projects presented here provide multifaceted evidence for the strengths of a surface-based approach to hippocampal analysis as developed in this thesis, and these methods are readily deployable in new neuroimaging work.
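    The idea of a folding-respecting coordinate framework can be illustrated, under strong simplifying assumptions, by solving Laplace's equation over a toy 2-D tissue mask between two fixed boundaries, yielding a smooth 0-to-1 coordinate that follows the tissue however it folds; the real method operates on 3-D MRI segmentations, so the sketch below is only an analogy.

```python
# Toy analogy: Jacobi relaxation of Laplace's equation inside a tissue mask,
# with one boundary held at 0 and the other at 1, giving a long-axis coordinate.
import numpy as np

def laplace_coordinate(mask, src, snk, n_iter=5000):
    """mask/src/snk: boolean 2-D arrays (tissue, source boundary, sink boundary)."""
    coord = np.zeros(mask.shape, dtype=float)
    coord[snk] = 1.0
    interior = mask & ~src & ~snk
    for _ in range(n_iter):
        total = np.zeros_like(coord)
        count = np.zeros_like(coord)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            total += np.where(np.roll(mask, shift, axis), np.roll(coord, shift, axis), 0.0)
            count += np.roll(mask, shift, axis)          # only neighbours inside the mask
        coord = np.where(interior, total / np.maximum(count, 1), coord)
    return np.where(mask, coord, np.nan)

# A toy "ribbon" of tissue with its two ends as the fixed boundaries
mask = np.zeros((20, 60), dtype=bool); mask[8:12, 5:55] = True
src = np.zeros_like(mask); src[8:12, 5] = True
snk = np.zeros_like(mask); snk[8:12, 54] = True
u = laplace_coordinate(mask, src, snk)
print(np.nanmin(u), np.nanmax(u))                        # ~0 and 1 along the ribbon
```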

    A whole brain probabilistic generative model: Toward realizing cognitive architectures for developmental robots

    Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence (AGI), is the holy grail of the artificial intelligence (AI) field. Furthermore, a computational model that enables an artificial system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes an approach to develop a cognitive architecture by integrating elemental cognitive modules to enable the training of the modules as a whole. This approach is based on two ideas: (1) brain-inspired AI, learning human brain architecture to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive architecture to develop a cognitive system for developmental robots by integrating PGMs. The proposed development framework is called a whole-brain PGM (WB-PGM), which differs fundamentally from existing cognitive architectures in that it can learn continuously through a system based on sensory-motor information. In this paper, we describe the rationale for WB-PGM, the current status of PGM-based elemental cognitive modules, their relationship with the human brain, the approach to the integration of the cognitive modules, and future challenges. Our findings can serve as a reference for brain studies. As PGMs describe explicit informational relationships between variables, WB-PGM provides interpretable guidance from computational sciences to brain science. By providing such information, researchers in neuroscience can provide feedback to researchers in AI and robotics on what the current models lack with reference to the brain. Further, it can facilitate collaboration among researchers in neuro-cognitive sciences as well as AI and robotics.
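    The composition principle behind WB-PGM, elemental generative modules integrated by sharing latent variables, can be illustrated with a toy model in which two observation modalities share one latent category; everything in the sketch is an assumed simplification, not an actual WB-PGM module.

```python
# Toy sketch: two Gaussian "modality" modules share a discrete latent category,
# so posterior inference over the shared latent combines evidence from both.
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 2                                   # latent categories, feature dimensions
mu_vision = rng.normal(size=(K, D))           # module 1: vision-like observations
mu_speech = rng.normal(size=(K, D))           # module 2: speech-like observations
prior = np.full(K, 1.0 / K)

def log_gauss(x, mu, var=0.5):
    return -0.5 * np.sum((x - mu) ** 2, axis=-1) / var

def posterior(x_vision, x_speech):
    """Combine both modules' likelihoods through the shared latent category."""
    logp = np.log(prior) + log_gauss(x_vision, mu_vision) + log_gauss(x_speech, mu_speech)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Generate one observation pair from category 2 and infer the shared latent
z = 2
x_v = mu_vision[z] + 0.1 * rng.normal(size=D)
x_s = mu_speech[z] + 0.1 * rng.normal(size=D)
print(posterior(x_v, x_s))                    # posterior mass should peak at index 2
```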