
    From Neuronal cost-based metrics towards sparse coded signals classification

    Sparse signal decompositions are key to efficient compression, storage, and denoising, but appropriate methods for exploiting this sparsity for classification are lacking. Sparse coding methods based on dictionary learning can produce spikegrams, a sparse, temporal representation of signals as a raster of kernel occurrences through time. This paper proposes a method for coupling spike-train cost-based metrics (from neuroscience) with spikegram sparse decompositions for clustering multivariate signals. Experiments on character trajectories, recorded by sensors during natural handwriting, demonstrate the validity of the approach against classification performance currently reported in the literature.
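    The cost-based spike-train metrics mentioned here come from the neuroscience literature; the Victor-Purpura distance is a standard example and can be computed with an edit-distance-style dynamic program. The sketch below is only an illustration of that family of metrics, not the paper's own implementation; the function name and the shift-cost value are assumptions.

```python
import numpy as np

def victor_purpura_distance(t1, t2, q=1.0):
    """Cost-based spike train distance (Victor-Purpura style).

    t1, t2 : 1-D arrays of spike times (seconds), sorted ascending.
    q      : cost per unit time of shifting a spike; q = 0 reduces the
             metric to the difference in spike counts.
    """
    n, m = len(t1), len(t2)
    # G[i, j] = minimal cost of matching the first i spikes of t1 with the first j of t2
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)   # delete all remaining spikes of t1
    G[0, :] = np.arange(m + 1)   # insert all remaining spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(
                G[i - 1, j] + 1,                                    # delete a spike
                G[i, j - 1] + 1,                                    # insert a spike
                G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]),   # shift a spike
            )
    return G[n, m]

# Example: two nearly aligned trains differ only by small shifts
d = victor_purpura_distance(np.array([0.1, 0.5, 0.9]),
                            np.array([0.12, 0.48, 0.95]), q=10.0)
print(d)
```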

    Graph analysis of functional brain networks: practical issues in translational neuroscience

    The brain can be regarded as a network: a connected system in which nodes, or units, represent specialized regions and links, or connections, represent communication pathways. From a functional perspective, communication is coded by the temporal dependence between the activities of different brain areas. In the last decade, the abstract representation of the brain as a graph has made it possible to visualize functional brain networks and to describe their non-trivial topological properties in a compact and objective way. Nowadays, the use of graph analysis in translational neuroscience has become essential for quantifying brain dysfunctions in terms of aberrant reconfiguration of functional brain networks. Despite its evident impact, graph analysis of functional brain networks is not a simple toolbox that can be blindly applied to brain signals. On the one hand, it requires know-how of all the methodological steps of the processing pipeline that manipulates the input brain signals and extracts the functional network properties. On the other hand, knowledge of the neural phenomenon under study is required to perform physiologically relevant analyses. The aim of this review is to provide practical indications for making sense of brain network analysis and for countering counterproductive attitudes.
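    As a rough illustration of the kind of pipeline the review discusses (not its specific recommendations), one common choice is Pearson correlation as the temporal-dependence measure, a fixed absolute threshold to define links, and standard graph statistics on the result; the threshold value and toy data below are arbitrary assumptions.

```python
import numpy as np
import networkx as nx

def functional_network(signals, threshold=0.3):
    """Build a functional brain network from region-wise time series.

    signals   : array of shape (n_regions, n_samples)
    threshold : absolute correlation below which links are discarded
                (one of many possible connectivity/thresholding choices)
    """
    fc = np.corrcoef(signals)            # temporal dependence: Pearson correlation
    np.fill_diagonal(fc, 0.0)            # ignore self-connections
    adjacency = (np.abs(fc) >= threshold).astype(int)
    return nx.from_numpy_array(adjacency)

# Toy example with random signals (a stand-in for real fMRI/EEG data)
rng = np.random.default_rng(0)
G = functional_network(rng.standard_normal((20, 500)), threshold=0.1)
print(nx.density(G), nx.average_clustering(G))
```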

    Measuring spike train synchrony

    Estimating the degree of synchrony or reliability between two or more spike trains is a frequent task in both experimental and computational neuroscience. In recent years, many different methods have been proposed that typically compare the timing of spikes on a time scale that has to be fixed beforehand. Here, we propose the ISI-distance, a simple complementary approach that extracts information from the interspike intervals by evaluating the ratio of the instantaneous frequencies. The method is parameter free, time-scale independent, and easy to visualize, as illustrated by an application to real neuronal spike trains obtained in vitro from rat slices. In a comparison with existing approaches on spike trains extracted from a simulated Hindmarsh-Rose network, the ISI-distance performs as well as the best time-scale-optimized measure based on spike timing. The MATLAB source code for the method is available at http://inls.ucsd.edu/~kreuz/Source-Code/Spike-Sync.html.
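    A minimal sketch of the idea as described in the abstract: sample the recording on a grid, look up the interspike interval each train is currently in, and average the normalized ratio of the two instantaneous intervals. Edge effects are handled crudely here by padding with the recording boundaries; for the authors' exact definition and code, see the linked source.

```python
import numpy as np

def isi_distance(t1, t2, t_start, t_end, n_samples=1000):
    """Sampled-grid sketch of the ISI-distance between two spike trains.

    t1, t2 : sorted spike times of the two trains within (t_start, t_end).
    Returns the time-averaged absolute ISI ratio, a parameter-free dissimilarity.
    """
    def current_isi(spikes, times):
        # interspike interval that each sample time falls into,
        # padded with the recording boundaries at both ends
        edges = np.concatenate(([t_start], spikes, [t_end]))
        idx = np.searchsorted(edges, times, side='right') - 1
        idx = np.clip(idx, 0, len(edges) - 2)
        return edges[idx + 1] - edges[idx]

    ts = np.linspace(t_start, t_end, n_samples, endpoint=False)
    x1, x2 = current_isi(t1, ts), current_isi(t2, ts)
    # normalized ratio of instantaneous ISIs, antisymmetric in the two trains
    ratio = np.where(x1 <= x2, x1 / x2 - 1.0, -(x2 / x1 - 1.0))
    return np.mean(np.abs(ratio))

d = isi_distance(np.array([0.1, 0.3, 0.5, 0.7]),
                 np.array([0.15, 0.45, 0.75]), 0.0, 1.0)
print(d)
```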

    Heuristic Spike Sorting Tuner (HSST), a framework to determine optimal parameter selection for a generic spike sorting algorithm

    Extracellular microelectrodes frequently record neural activity from more than one neuron in the vicinity of the electrode. The process of labeling each recorded spike waveform with the identity of its source neuron is called spike sorting and is often approached from an abstracted statistical perspective. However, these approaches do not consider neurophysiological realities and may ignore important features that could improve their accuracy. Further, standard algorithms typically require selection of at least one free parameter, which can have significant effects on the quality of the output. We describe a Heuristic Spike Sorting Tuner (HSST) that determines the optimal choice of the free parameters for a given spike sorting algorithm based on the neurophysiological qualification of unit isolation and signal discrimination. A set of heuristic metrics is used to score the output of a spike sorting algorithm over a range of free parameters, yielding optimal sorting quality. We demonstrate that these metrics can be used to tune parameters in several spike sorting algorithms. The HSST algorithm is robust to variations in signal-to-noise ratio and in the number and relative size of units per channel. Moreover, it is computationally efficient, operates unsupervised, and is parallelizable for batch processing.
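    At its core, the framework amounts to a scored sweep over a sorter's free parameters. The following is a hedged sketch of that loop, with hypothetical callables standing in for the sorting algorithm and the heuristic quality metrics; HSST's actual metrics and aggregation are defined in the paper.

```python
import numpy as np

def tune_sorter(recording, sorter, param_grid, metrics):
    """Sketch of the HSST idea: sweep a sorter's free-parameter settings and
    keep the one whose output scores best on heuristic quality metrics.

    recording  : raw extracellular data (whatever object the sorter accepts)
    sorter     : callable (recording, **params) -> sorted units
    param_grid : list of dicts, one candidate parameter setting each
    metrics    : list of callables (units) -> float, higher = better quality
    """
    best_params, best_score = None, -np.inf
    for params in param_grid:
        units = sorter(recording, **params)
        # aggregate the heuristic metrics into a single sorting-quality score
        score = np.mean([m(units) for m in metrics])
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy usage with a dummy threshold "sorter" and a dummy quality metric
dummy_sorter = lambda rec, threshold: [w for w in rec if abs(w) > threshold]
dummy_metric = lambda units: float(len(units))
best, score = tune_sorter([0.2, 1.5, -2.0, 0.1], dummy_sorter,
                          [{'threshold': t} for t in (0.5, 1.0, 2.5)],
                          [dummy_metric])
print(best, score)
```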

    Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle to that goal is catastrophic forgetting, which occurs when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e., up- or down-regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging but important problem of catastrophic forgetting.
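    As a purely illustrative sketch of the mechanism described (not the authors' implementation), a point source of a neuromodulatory "chemical" can scale each connection's learning rate by a Gaussian of its distance from the source, so that learning is confined to a spatial region; all names and values below are assumptions.

```python
import numpy as np

def modulated_learning_rates(weight_positions, source_xy, base_lr=0.1, sigma=0.5):
    """Hypothetical diffusion-gated learning: a neuromodulatory chemical released
    at source_xy decays with distance, and the local concentration up- or
    down-regulates each connection's learning rate.

    weight_positions : (n_connections, 2) spatial coordinates of connections
    source_xy        : (2,) location of the diffusion point source
    """
    d2 = np.sum((weight_positions - np.asarray(source_xy)) ** 2, axis=1)
    concentration = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian diffusion profile
    return base_lr * concentration                      # learning localized near the source

# Connections far from the source barely learn, isolating a task to a module
positions = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 3.0]])
print(modulated_learning_rates(positions, source_xy=(0.0, 0.0)))
```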