
    Robust spatial memory maps encoded in networks with transient connections

    The spiking activity of principal cells in the mammalian hippocampus encodes an internalized neuronal representation of the ambient space, a cognitive map. Once learned, such a map enables the animal to navigate a given environment for a long period. However, the neuronal substrate that produces this map remains transient: the synaptic connections in the hippocampus and in the downstream neuronal networks never cease to form and to deteriorate at a rapid rate. How can the brain maintain a robust, reliable representation of space using a network that constantly changes its architecture? Here we demonstrate, using novel algebraic topology techniques, that the stability of the cognitive map is a generic, emergent phenomenon. The model allows evaluating the effect produced by specific physiological parameters, e.g., the distribution of the connections' decay times, on the properties of the cognitive map as a whole. It also points out that spatial memory deterioration caused by weakening or excessive loss of the synaptic connections may be compensated by stimulating the neuronal activity. Lastly, the model explicates the functional importance of the complementary learning systems for processing spatial information at different levels of spatiotemporal granularity, by establishing three complementary timescales at which spatial information unfolds. Thus, the model provides a principal insight into how the brain can develop a reliable representation of the world, learn and retain memories despite the complex plasticity of the underlying networks, and allows studying how instabilities and memory deterioration mechanisms may affect the learning process. Comment: 24 pages, 10 figures, 4 supplementary figures.
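
    As a toy illustration of the underlying phenomenon (a minimal sketch, not the authors' model; every parameter below is invented), consider the simplest topological invariant of a transient coactivity graph, its number of connected components. The count can hold steady at a single component even as individual connections decay and re-form:

        import random

        def betti0(n, edges):
            """Number of connected components, via a union-find shortcut."""
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x
            for u, v in edges:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
            return len({find(x) for x in range(n)})

        random.seed(0)
        n_cells, steps = 50, 200
        alive = {}                      # edge -> remaining lifetime, in steps
        for t in range(steps):
            # existing connections decay away ...
            alive = {e: life - 1 for e, life in alive.items() if life > 1}
            # ... while new transient connections keep forming
            for _ in range(15):
                u, v = random.sample(range(n_cells), 2)
                alive[(u, v)] = random.randint(5, 40)
            if t % 20 == 0:
                print(t, betti0(n_cells, alive.keys()))

    In the paper the tracked invariants are the Betti numbers of the full coactivity complex, computed with persistent-homology machinery rather than this union-find shortcut.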

    Geometry Helps to Compare Persistence Diagrams

    Exploiting geometric structure to improve the asymptotic complexity of discrete assignment problems is a well-studied subject. In contrast, the practical advantages of using geometry for such problems have not been explored. We implement geometric variants of the Hopcroft--Karp algorithm for bottleneck matching (based on previous work by Efrat et al.) and of the auction algorithm by Bertsekas for Wasserstein distance computation. Both implementations use k-d trees to replace a linear scan with a geometric proximity query. Our interest in this problem stems from the desire to compute distances between persistence diagrams, a problem that comes up frequently in topological data analysis. We show that our geometric matching algorithms lead to a substantial performance gain, both in running time and in memory consumption, over their purely combinatorial counterparts. Moreover, our implementation significantly outperforms the only other implementation available for comparing persistence diagrams. Comment: 20 pages, 10 figures; extended version of a paper published at ALENEX 2016.
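
    A minimal SciPy sketch of the geometric idea (not the authors' C++ implementation): two diagrams admit a matching of bottleneck cost at most r exactly when the bipartite graph connecting points at l-infinity distance at most r, with diagonal projections added on both sides, has a perfect matching, and a k-d tree builds that graph without the quadratic all-pairs scan.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import maximum_bipartite_matching
        from scipy.spatial import cKDTree

        def matchable_within(diag_a, diag_b, r):
            # augment each side with the diagonal projections of the other,
            # so that any point may be matched to the diagonal instead
            proj = lambda d: np.column_stack([(d[:, 0] + d[:, 1]) / 2] * 2)
            a = np.vstack([diag_a, proj(diag_b)])
            b = np.vstack([diag_b, proj(diag_a)])
            n = len(a)
            rows, cols = [], []
            tree = cKDTree(b)            # proximity queries, l-infinity metric
            for i, pt in enumerate(a):
                for j in tree.query_ball_point(pt, r, p=np.inf):
                    rows.append(i)
                    cols.append(j)
            # diagonal projections match one another at zero cost
            for i in range(len(diag_a), n):
                for j in range(len(diag_b), n):
                    rows.append(i)
                    cols.append(j)
            graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
            match = maximum_bipartite_matching(graph, perm_type='column')
            return bool((match >= 0).all())

    A binary search over candidate values of r (the pairwise distances) then yields the bottleneck distance; the paper's Hopcroft--Karp variant plays the role that maximum_bipartite_matching plays here.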

    Homology and Robustness of Level and Interlevel Sets

    Given a function $f: \mathbb{X} \to \mathbb{R}$ on a topological space $\mathbb{X}$, we consider the preimages of intervals and their homology groups and show how to read the ranks of these groups from the extended persistence diagram of $f$. In addition, we quantify the robustness of the homology classes under perturbations of $f$ using well groups, and we show how to read the ranks of these groups from the same extended persistence diagram. The special case $\mathbb{X} = \mathbb{R}^3$ has ramifications in the fields of medical imaging and scientific visualization.
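
    To convey the flavor of these counting rules (notation assumed here, not quoted from the paper): in the simplest case of a sublevel set, the rank of the k-th homology group is the number of points of the ordinary persistence diagram whose birth--death interval straddles the threshold,

        \operatorname{rank} H_k\bigl(f^{-1}((-\infty, t])\bigr)
            = \#\,\bigl\{ (b, d) \in \mathrm{Dgm}_k(f) : b \le t < d \bigr\}.

    The paper's results extend counts of this kind to arbitrary intervals $[a, b]$, with the ordinary, extended, and relative subdiagrams of extended persistence together playing the role of the single ordinary diagram.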

    Quantifying Transversality by Measuring the Robustness of Intersections

    By definition, transverse intersections are stable under infinitesimal perturbations. Using persistent homology, we extend this notion to a measure. Given a space of perturbations, we assign to each homology class of the intersection its robustness, the magnitude of a perturbation in this space necessary to kill it, and we prove that robustness is stable. Among the applications of this result are a stable notion of robustness for fixed points of continuous mappings and a statement of stability for contours of smooth mappings.
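
    In hedged notation (assumed here, not quoted from the paper): if the well group $U(r)$ collects the homology classes of the intersection that survive every admissible perturbation of magnitude at most $r$, then the robustness of a class is the largest such radius,

        \rho(\alpha) \;=\; \sup \{\, r \ge 0 : \alpha \in U(r) \,\},

    and the stability statement bounds how much these values can change under a sup-norm perturbation of the mapping itself.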

    Parametrized Homology via Zigzag Persistence

    This paper develops the idea of homology for 1-parameter families of topological spaces. We express parametrized homology as a collection of real intervals, each corresponding to a homological feature supported over that interval, or, equivalently, as a persistence diagram. By defining persistence in terms of finite rectangle measures, we classify barcode intervals into four classes. Each of these conveys how the homological features perish at the two ends of the interval over which they are defined.
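
    For concreteness (standard levelset-zigzag terminology, assumed rather than quoted): the four classes correspond to the four possible combinations of open and closed endpoints,

        [a, b], \qquad [a, b), \qquad (a, b], \qquad (a, b),

    with a closed or open endpoint recording the manner in which the corresponding feature perishes at that end of its interval of support.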

    Dualities in persistent (co)homology

    We consider sequences of absolute and relative homology and cohomology groups that arise naturally for a filtered cell complex. We establish algebraic relationships between their persistence modules, and show that they contain equivalent information. We explain how one can use the existing algorithm for persistent homology to process any of the four modules, and relate it to a recently introduced persistent cohomology algorithm. We present experimental evidence for the practical efficiency of the latter algorithm. Comment: 16 pages, 3 figures; submitted to the Inverse Problems special issue on Topological Data Analysis.
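
    A hedged summary of the central relationship (the standard statement of this duality): for a filtration $K_1 \subseteq K_2 \subseteq \cdots \subseteq K_n$ with field coefficients, persistent homology and persistent cohomology determine the same barcode,

        \mathrm{Dgm}\bigl(\{H_k(K_i)\}_i\bigr)
            \;=\; \mathrm{Dgm}\bigl(\{H^k(K_i)\}_i\bigr),

    and the absolute and relative barcodes determine each other as well, which is why a single algorithm suffices to process any of the four modules.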

    Communication-Avoiding Optimization Methods for Distributed Massive-Scale Sparse Inverse Covariance Estimation

    Across a variety of scientific disciplines, sparse inverse covariance estimation is a popular tool for capturing the underlying dependency relationships in multivariate data. Unfortunately, most estimators are not scalable enough to handle the sizes of modern high-dimensional data sets (often on the order of terabytes), and assume Gaussian samples. To address these deficiencies, we introduce HP-CONCORD, a highly scalable optimization method for estimating a sparse inverse covariance matrix based on a regularized pseudolikelihood framework, without assuming Gaussianity. Our parallel proximal gradient method uses a novel communication-avoiding linear algebra algorithm and runs across a multi-node cluster with up to 1k nodes (24k cores), achieving parallel scalability on problems with up to ~819 billion parameters (1.28 million dimensions); even on a single node, HP-CONCORD demonstrates scalability, outperforming a state-of-the-art method. We also use HP-CONCORD to estimate the underlying dependency structure of the brain from fMRI data, and use the result to identify functional regions automatically. The results show good agreement with a clustering from the neuroscience literature. Comment: main paper: 15 pages, appendix: 24 pages.
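
    As a hedged single-node sketch of the proximal-gradient scheme underneath (NumPy, fixed step size; the paper's contribution is the communication-avoiding distributed version, which this does not attempt): gradient steps handle the smooth part of a CONCORD-style pseudolikelihood, and soft-thresholding handles the l1 penalty on the off-diagonal entries.

        import numpy as np

        def concord_ista(S, lam, tau=0.01, iters=500):
            """Proximal-gradient (ISTA) sketch for a CONCORD-style objective:
            g(O) = -sum_i log O_ii + 0.5 * tr(O S O), plus lam * l1 off-diagonal."""
            p = S.shape[0]
            O = np.eye(p)
            for _ in range(iters):
                grad = -np.diag(1.0 / np.diag(O)) + 0.5 * (S @ O + O @ S)
                Z = O - tau * grad
                # soft-threshold off-diagonal entries; the diagonal is unpenalized
                T = np.sign(Z) * np.maximum(np.abs(Z) - tau * lam, 0.0)
                np.fill_diagonal(T, np.diag(Z))
                O = 0.5 * (T + T.T)                        # keep the iterate symmetric
                np.fill_diagonal(O, np.maximum(np.diag(O), 1e-8))  # keep the log defined
            return O

        # usage: Omega = concord_ista(np.cov(X, rowvar=False), lam=0.1)

    HP-CONCORD replaces the dense matrix products above with communication-avoiding distributed linear algebra, which is where the scalability comes from.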

    In situ and in-transit analysis of cosmological simulations

    Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analyzing simulation data while the simulation runs, as an alternative to the traditional workflow of periodically saving large data sets to disk for subsequent 'offline' analysis. We demonstrate this approach in the compressible gas dynamics/N-body code Nyx, a hybrid MPI + OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach in which all MPI processes periodically halt the main simulation and analyze the data that they own ('in situ'). The other partitions the processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other 'sidecar' group, which post-processes it while the simulation continues ('in-transit'). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two analysis suites with distinct performance behavior: one finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and the other calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both produce summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
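
    A minimal mpi4py sketch of the in-transit pattern (hypothetical 1-in-4 split and invented payload sizes; the production code is C++ on BoxLib): the world communicator is split into a simulation group and a 'sidecar' group, which proceed asynchronously except when data changes hands.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        n_sidecar = max(1, size // 4)              # hypothetical split ratio
        is_sidecar = rank >= size - n_sidecar
        # subcommunicator for collectives within each group (unused in this toy)
        group = comm.Split(color=1 if is_sidecar else 0, key=rank)

        if not is_sidecar:
            # simulation group: evolve local state, periodically ship it out
            partner = size - n_sidecar + (rank % n_sidecar)
            for step in range(10):
                field = np.random.rand(1024)       # stand-in for simulation data
                comm.Send(field, dest=partner, tag=step)
        else:
            # sidecar group: post-process while the simulation keeps running
            me = rank - (size - n_sidecar)
            senders = [r for r in range(size - n_sidecar) if r % n_sidecar == me]
            for step in range(10):
                for src in senders:
                    buf = np.empty(1024)
                    comm.Recv(buf, source=src, tag=step)
                    # the summary statistic is far smaller than the raw field
                    print(f"step {step}: mean from rank {src} = {buf.mean():.4f}")

    Run with, e.g., mpiexec -n 8 python in_transit.py; the in situ variant instead has every rank halt the solve and analyze its own data in place.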