    Isometric Representations in Neural Networks Improve Robustness

    Full text link
    Artificial and biological agents cannot learn from completely random and unstructured data. The structure of data is encoded in the metric relationships between data points. In the context of neural networks, neuronal activity within a layer forms a representation reflecting the transformation that the layer implements on its inputs. To utilize the structure of the data faithfully, such representations should reflect the input distances and thus be continuous and isometric. Supporting this view, recent findings in neuroscience suggest that generalization and robustness are tied to neural representations being continuously differentiable. In machine learning, most algorithms lack robustness and are generally thought to rely on aspects of the data that differ from those humans use, as is commonly seen in adversarial attacks. During cross-entropy classification, the metric and structural properties of network representations are usually broken, both between and within classes. This side effect of training can lead to instabilities under perturbations near locations where such structure is not preserved. A standard route to robustness is to add ad hoc regularization terms, but to our knowledge, forcing representations to preserve the metric structure of the input data as a stabilising mechanism has not yet been studied. In this work, we train neural networks to perform classification while simultaneously maintaining within-class metric structure, leading to isometric within-class representations. Such network representations turn out to be beneficial for accurate and robust inference. By stacking layers with this property, we create a network architecture that facilitates hierarchical manipulation of internal neural representations. Finally, we verify that isometric regularization improves robustness to adversarial attacks on MNIST. Comment: 14 pages, 4 figures
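
    The within-class isometry penalty described above can be sketched in a few lines. The following is a minimal PyTorch illustration, assuming a Euclidean metric, an L2 mismatch loss, and a weighting factor lam; the paper's exact loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def isometry_loss(x, h, y):
    """Mean squared mismatch between pairwise input and representation
    distances, counted over same-class pairs only (assumed loss form)."""
    d_in = torch.cdist(x.flatten(1), x.flatten(1))    # input-space distances
    d_rep = torch.cdist(h, h)                         # representation distances
    same = (y[:, None] == y[None, :]).float()         # same-class indicator
    same = same - torch.eye(len(y), device=y.device)  # exclude self-pairs
    return ((d_in - d_rep) ** 2 * same).sum() / same.sum().clamp(min=1)

def training_loss(logits, x, h, y, lam=0.1):
    """Cross-entropy plus a weighted within-class isometry penalty."""
    return F.cross_entropy(logits, y) + lam * isometry_loss(x, h, y)
```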

    Numerical stability of a scalar neural-field model with a sigmoidal firing-rate function

    Get PDF
    Many neural-field models in neuroscience mimic the all-or-nothing behavior of a neuron firing an action potential. The neural-field model considered in this study is a spatio-temporal scalar neural-field (NFL) model given as a partial integro-differential equation (PIDE). The model yields temporal changes in the probability of neural activity at a given spatial point: a point has a high probability of activity when the solution of the NFL model is above a firing threshold, and the sensitivity of this probability to change is set by a steepness parameter β. Various numerical methods have been employed on this (and similar) types of neural-field models without analysis of numerical convergence, stability, and consistency. The aim of this thesis is to obtain a better understanding of the NFL model's well-posedness theory and its biophysical background, and to analyze the numerical convergence of the NFL model when approximated by simple explicit methods. To obtain a comprehensive overview of the NFL model, we review its biophysical derivation and discuss two proposed formalisms with respect to numerical analysis and well-posedness theory. Further, we review a global well-posedness proof of the Cauchy-formulated NFL model in a Banach space of continuous functions. We then analyze the numerical error of the forward Euler method and Heun's second-order Runge-Kutta (RK2) method, and illustrate numerical behavior with experiments applying forward Euler and an explicit RK4 method. The presented analysis indicates stiffness (the need for a very small temporal step length) in the NFL model when approximated by the forward Euler method, owing to a dependency between the numerical error and the steepness parameter β. The RK2 truncation error contains β^2, suggesting that β^N appears in the RKN truncation error; thus, increasing the order of the RK method is not expected to remedy the stiffness. Numerical experiments on a simplified version of the NFL model demonstrate stiffness in the proximity of the firing threshold for moderately sized steepness parameters β. We further demonstrate a divergence between the exact and the approximated solution of a slightly less simplified version of the NFL model: the approximated solution is shifted from one basin of attraction to another, giving rise to a large numerical error. In addition, we observe spurious solutions in the form of false oscillations and an erroneous fixed point, indicating that numerical solutions of the NFL model can be rather arbitrary when the temporal step lengths are not carefully selected. These results point to serious numerical stability issues in the NFL model. We suggest further evaluation and development of more sophisticated numerical methods before future application of this (and similar) types of neural-field models. Finally, we argue that the apparent simplicity of the NFL model is undermined by the complications observed in its numerical approximation, and propose that focus be directed towards developing field models that are more robust with respect to numerical analysis rather than towards more complicated numerical methods.
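
    The β-dependent stiffness and false oscillations reported above can be reproduced in a toy surrogate. The sketch below applies forward Euler to a single self-inhibiting rate unit du/dt = -u + S(I - w·u), whose Jacobian at the fixed point scales like -(1 + wβ/4), so a step size that converges for small β produces spurious oscillations for large β. This is an illustrative stand-in for the full PIDE, with all parameter values assumed.

```python
import numpy as np

def S(x, beta, theta=0.5):
    """Sigmoidal firing-rate function with steepness beta."""
    return 1.0 / (1.0 + np.exp(-beta * (x - theta)))

def euler_run(beta, dt=0.1, w=1.0, steps=400):
    """Forward Euler on du/dt = -u + S(I - w*u); returns the last two iterates."""
    I = 0.5 + 0.5 * w          # places the fixed point u* = 0.5 at threshold
    u, prev = 0.6, None
    for _ in range(steps):
        prev = u
        u = u + dt * (-u + S(I - w * u, beta))
    return prev, u             # equal if converged, differing if oscillating

for beta in (4, 40, 400):
    a, b = euler_run(beta)
    print(f"beta={beta:4d}  u[-2]={a:.4f}  u[-1]={b:.4f}")
```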

    Dissecting neuronal circuits for navigation in experiments and models

    No full text
    The sense of space is important for movement and the formation of memories. Grid cells in the entorhinal cortex are believed to be key components of spatial reasoning. Established theory suggests that brain waves in the theta frequency band and stable connectivity are essential for the hexagonal activity pattern of grid cells. Lepperød and colleagues used computational models and experiments to assess how the pattern of grid cells emerges and how their activity remains stable across time and space. Using optogenetics to control oscillatory activity in inhibitory cells of the medial septal area, they found that pacing brain waves at different frequencies abolished theta oscillations while the spatial pattern of grid cells remained stable. These results strongly indicate that the spatial and temporal activity of grid cells can be dissociated and that oscillatory activity is unlikely to cause their remarkable hexagonal activity pattern. Grid cells show exceptional stability, and Lepperød proposed that an extracellular matrix structure called perineuronal nets stabilizes connectivity among grid cells by regulating synaptic plasticity. Breaking down these nets by injecting a bacterial enzyme, chondroitinase ABC, destabilized grid cell patterns in novel environments and reduced pairwise correlations, suggesting that perineuronal nets support the stable activity of grid cells. The work further indicates that grid cells emerge through connectivity, but this remains unresolved. Using spiking activity to infer connectivity is difficult, and naive inference may reflect spurious correlations when neurons receive common input. To avoid this problem, Lepperød used the instrumental variable technique commonly used in econometrics. Combining neural recordings with optogenetics, the method allows inference of causal interactions between neurons, and it shows promising results in simulations, where it remains causally valid while naive methods fail.

    Spikeometric: Linear Non-Linear Cascade Spiking Neural Networks with Pytorch Geometric

    No full text
    The spikeometric package is a framework for simulating spiking neural networks (SNNs) using generalized linear models (GLMs) and Linear-Nonlinear-Poisson models (LNPs) in Python. It is built on top of the PyTorch Geometric package and makes use of its powerful graph neural network (GNN) modules and efficient graph representation. It is designed to be fast, flexible, and easy to use, and is intended for research purposes.
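
    As a rough illustration of the Linear-Nonlinear-Poisson cascade such a simulator implements, the sketch below steps a small recurrent network in plain PyTorch: a linear filter of recent spikes, an exponential nonlinearity, and Poisson (Bernoulli-per-bin) spiking. It does not use the spikeometric API, and the weights, nonlinearity, and bin width are assumptions.

```python
import torch

torch.manual_seed(0)
n_neurons, n_steps, dt = 20, 1000, 1e-3       # network size, bins, bin width (s)

W = 0.1 * torch.randn(n_neurons, n_neurons)   # synaptic weights (linear stage)
bias = 2.0 * torch.ones(n_neurons)            # background drive (log-rate)

spikes = torch.zeros(n_steps, n_neurons)
for t in range(1, n_steps):
    drive = spikes[t - 1] @ W.T + bias        # linear filter of recent spikes
    rate = torch.exp(drive)                   # exponential nonlinearity (Hz)
    p = torch.clamp(rate * dt, max=1.0)       # spike probability per bin
    spikes[t] = torch.bernoulli(p)            # Poisson-style spiking

print(f"mean rate: {spikes.mean().item() / dt:.1f} Hz")
```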

    Inferring causal connectivity from pairwise recordings and optogenetics.

    No full text
    To understand the neural mechanisms underlying brain function, neuroscientists aim to quantify causal interactions between neurons, for instance by perturbing the activity of neuron A and measuring the effect on neuron B. Recently, manipulating neuron activity using light-sensitive opsins (optogenetics) has increased the specificity of neural perturbation. However, with widefield optogenetic interventions, multiple neurons are usually perturbed, producing a confound: any of the stimulated neurons could have affected the postsynaptic neuron, making it challenging to discern which neurons produced the causal effect. Here, we show how such confounds produce large biases in interpretations. We explain how confounding can be reduced by combining instrumental variables (IV) and difference-in-differences (DiD) techniques from econometrics. Combined, these methods can estimate (causal) effective connectivity by exploiting the weak, approximately random signal resulting from the interaction between stimulation and the absolute refractory period of the neuron. In simulated neural networks, we find that estimates using ideas from IV and DiD outperform naive techniques, suggesting that methods from causal inference can be useful for disentangling neural interactions in the brain.
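
    The instrumental-variable idea can be illustrated with a toy two-neuron simulation: the stimulus serves as the instrument, the refractory state supplies quasi-random variation in whether neuron A responds, and the Wald ratio Cov(z, b)/Cov(z, a) recovers the causal effect that a naive regression misestimates under a hidden common input. This is a hedged sketch of the IV component only (the DiD step is omitted), with all numbers assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.binomial(1, 0.5, n)            # optogenetic stimulus (instrument)
u = rng.normal(0.0, 1.0, n)            # unobserved common input (confounder)
refractory = rng.binomial(1, 0.3, n)   # 1 if neuron A is refractory at stim time

# A's activity: driven by the stimulus only when not refractory, plus
# confounded background input and private noise.
a = z * (1 - refractory) + 0.5 * u + rng.normal(0.0, 1.0, n)
beta_true = 0.8                        # true causal effect of A on B
b = beta_true * a + 0.5 * u + rng.normal(0.0, 1.0, n)

naive = np.cov(a, b)[0, 1] / np.var(a)        # biased by the confounder u
iv = np.cov(z, b)[0, 1] / np.cov(z, a)[0, 1]  # Wald/IV estimate
print(f"naive: {naive:.3f}   IV: {iv:.3f}   true: {beta_true}")
```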

    MIIND : A Model-Agnostic Simulator of Neural Populations

    Get PDF
    MIIND is a software platform for easily and efficiently simulating the behaviour of interacting populations of point neurons governed by any 1D or 2D dynamical system. The simulator is entirely agnostic to the underlying neuron model of each population and provides an intuitive method for controlling the amount of noise, which can significantly affect the overall behaviour. A network of populations can be set up quickly and easily using MIIND's XML-style simulation file format, which describes simulation parameters such as population interactions, transmission delays, post-synaptic potentials, and which output to record. During simulation, a visual display of each population's state provides immediate feedback on the behaviour, and population activity can be output to a file or passed to a Python script for further processing. The Python support also means that MIIND can be integrated into other software such as The Virtual Brain. MIIND's population density technique is a geometric and visual method for describing the activity of each neuron population; it encourages a deep consideration of the dynamics of the neuron model and provides insight into how the behaviour of each population is affected by the behaviour of its neighbours in the network. For 1D neuron models, MIIND performs far better than direct simulation for large populations. For 2D models, the performance comparison is more nuanced, but the population density approach still confers certain advantages over direct simulation. MIIND can be used to build neural systems that bridge the scales between an individual neuron model and a population network. This allows researchers to maintain a plausible path back from mesoscopic to microscopic scales while minimising the complexity of managing large numbers of interconnected neurons. In this paper, we introduce the MIIND system and its usage, and provide implementation details where appropriate.
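
    A conceptual sketch may help fix the idea behind the 1D population density technique: rather than simulating many individual neurons, one evolves a probability density over the membrane potential of a leaky integrate-and-fire population, and the flux through the firing threshold gives the population rate. This is not MIIND's API; the grid, parameters, and upwind scheme below are illustrative assumptions.

```python
import numpy as np

tau, v_th, v_reset = 0.02, 1.0, 0.0     # LIF time constant (s), threshold, reset
drive = 60.0                            # constant suprathreshold input
n_bins, dt, steps = 200, 5e-5, 6000     # grid resolution, time step, duration

v = np.linspace(-0.5, v_th, n_bins)     # membrane-potential grid
dv = v[1] - v[0]
reset_idx = np.argmin(np.abs(v - v_reset))

rho = np.zeros(n_bins)                  # probability density over v
rho[reset_idx] = 1.0 / dv               # all mass starts at reset

edges = 0.5 * (v[:-1] + v[1:])          # interior cell interfaces
drift_e = -edges / tau + drive          # dv/dt at interfaces (all positive here)

for _ in range(steps):
    f = drift_e * rho[:-1]                   # upwind interface flux (drift > 0)
    out = (-v[-1] / tau + drive) * rho[-1]   # flux through the threshold
    rho[0] -= dt * f[0] / dv
    rho[1:-1] += dt * (f[:-1] - f[1:]) / dv
    rho[-1] += dt * (f[-1] - out) / dv
    rho[reset_idx] += dt * out / dv          # re-inject fired mass at reset

print(f"population firing rate after 0.3 s ≈ {out:.1f} Hz")
```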

    Open source modules for tracking animal behavior and closed-loop stimulation based on Open Ephys and Bonsai

    No full text
    Objective: A major goal in systems neuroscience is to determine the causal relationship between neural activity and behavior. To this end, methods that combine monitoring neural activity, behavioral tracking, and targeted manipulation of neurons in closed loop are powerful tools. However, commercial systems that allow these types of experiments are usually expensive and rely on non-standardized data formats and proprietary software, which may hinder user modifications for specific needs. To promote reproducibility and data sharing in science, transparent software and standardized data formats are an advantage. Here, we present an open-source, low-cost, adaptable, and easy-to-set-up system for combined behavioral tracking, electrophysiology, and closed-loop stimulation. Approach: Based on the Open Ephys system (www.open-ephys.org), we developed multiple modules to include real-time tracking and behavior-based closed-loop stimulation. We describe the equipment and provide a step-by-step guide to set up the system. Combining the open-source software Bonsai (bonsai-rx.org) for analyzing camera images in real time with the newly developed modules in Open Ephys, we acquire position information, visualize tracking, and perform tracking-based closed-loop stimulation experiments. To analyze the acquired data, we provide an open-source file-reading package in Python. Main results: The system robustly visualizes real-time tracking and reliably recovers tracking information recorded at a range of sampling frequencies (30–1000 Hz). We combined electrophysiology with the newly developed tracking modules in Open Ephys to record place cell and grid cell activity in the hippocampus and the medial entorhinal cortex, respectively. Moreover, we present a case in which we used the system for closed-loop optogenetic stimulation of entorhinal grid cells. Significance: Expanding the Open Ephys system to include animal tracking and behavior-based closed-loop stimulation makes high-quality, low-cost experimental setups with standardized data formats more widely available to the neuroscience community.
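
    The behavior-based closed-loop logic can be sketched in a few lines of Python: poll the tracked position and gate a stimulation TTL line on entry to and exit from a target zone. Here read_position and set_ttl are hypothetical placeholders for the acquisition and output hooks of a given setup, not functions from Open Ephys or Bonsai.

```python
import math
import time

TARGET = (0.5, 0.5)    # centre of the stimulation zone (normalised coords)
RADIUS = 0.1           # zone radius

def in_zone(x: float, y: float) -> bool:
    """True when the tracked position lies inside the stimulation zone."""
    return math.hypot(x - TARGET[0], y - TARGET[1]) < RADIUS

def closed_loop(read_position, set_ttl, hz: float = 100.0) -> None:
    """Poll tracking at `hz` and gate a TTL output on zone entry/exit."""
    stimulating = False
    while True:
        x, y = read_position()            # latest tracked coordinates (hypothetical hook)
        if in_zone(x, y) != stimulating:  # zone state changed: toggle output
            stimulating = not stimulating
            set_ttl(stimulating)          # drive the stimulation TTL line (hypothetical hook)
        time.sleep(1.0 / hz)
```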

    Optogenetic pacing of medial septum parvalbumin-positive cells disrupts temporal but not spatial firing in grid cells

    No full text
    Grid cells in the medial entorhinal cortex (MEC) exhibit remarkable spatial activity patterns, with spikes coordinated by theta oscillations driven by the medial septal area (MSA). Spikes from grid cells advance relative to the theta phase in a phenomenon called phase precession, which has been suggested as essential for creating the spatial periodicity of grid cells. Here, we show that optogenetic activation of parvalbumin-positive (PV+) cells in the MSA enabled selective pacing of local field potential (LFP) oscillations in the MEC. During optogenetic stimulation, the grid cells were locked to the imposed pacing frequency but kept their spatial patterns. Phase precession was abolished, and speed information was no longer reflected in the LFP oscillations but was still carried by the rate coding of individual MEC neurons. Together, these results indicate that theta oscillations are not critical to the spatial pattern of grid cells and do not carry a crucial velocity signal.