    A Novel Spike-Wave Discharge Detection Framework Based on the Morphological Characteristics of Brain Electrical Activity Phase Space in an Animal Model

    Background: Animal models of absence epilepsy are widely used in studies of childhood absence epilepsy. Absence seizures appear in the brain's electrical activity as a specific spike-wave discharge (SWD) pattern. Reviewing long-term recordings of brain electrical activity is time-consuming, so automatic methods are necessary. Moreover, nonlinear techniques such as phase-space analysis are effective for analyzing brain electrical activity. In this study, we present a novel SWD-detection framework based on the geometrical characteristics of the phase space. Methods: The method consists of the following steps: (1) rat stereotaxic surgery and cortical electrode implantation, (2) long-term recording of brain electrical activity, (3) phase-space reconstruction, (4) extraction of geometrical features such as the volume, occupied space, and curvature of the signal trajectories, and (5) detection of SWDs by thresholding these features. We evaluated the approach by the accuracy of SWD detection. Results: The features change significantly in the transition from the normal state to epileptic seizures, and the proposed approach detected SWDs with 98% accuracy. Conclusion: These results support the view that nonlinear approaches can identify the dynamics of brain electrical activity signals.
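    The pipeline in steps (3)-(5) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the embedding parameters, the bounding-box "occupied space" feature, and the fixed threshold are all assumptions chosen for demonstration.

```python
import numpy as np

def embed(signal, dim=3, tau=10):
    """Phase-space reconstruction by time-delay embedding (step 3)."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

def occupied_volume(traj):
    """Crude 'occupied space' feature: volume of the trajectory's
    axis-aligned bounding box (one of the step-4 features)."""
    return np.prod(traj.max(axis=0) - traj.min(axis=0))

def detect_swd(eeg, fs=500, win_s=1.0, threshold=5.0):
    """Flag windows whose feature value exceeds a fixed threshold (step 5).
    The sampling rate and threshold here are illustrative assumptions."""
    win = int(win_s * fs)
    flags = []
    for start in range(0, len(eeg) - win + 1, win):
        flags.append(occupied_volume(embed(eeg[start : start + win])) > threshold)
    return np.array(flags)

# Demo on random noise standing in for a cortical recording.
eeg = np.random.randn(10 * 500)
flags = detect_swd(eeg)
print(f"{flags.sum()} of {len(flags)} windows flagged")
```

    In practice the delay, embedding dimension, and threshold would be tuned on annotated recordings rather than fixed a priori.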

    Detection and Prediction of Absence Seizures Based on Nonlinear Analysis of the EEG in Wag/Rij Animal Model

    Background: Epilepsy is a common neurological disorder with a prevalence of about 1% of the world population. Absence epilepsy is a form of generalized epilepsy marked by spike-wave discharges in the EEG; patients have frequent absence seizures that cause immediate loss of consciousness. Methods: In this study, we explored whether EEG changes can effectively detect absence epilepsy in an animal model using nonlinear features. To predict the occurrence of absence seizures, long-term EEG was recorded from the frontal cortex of seven WAG/Rij rats. After preprocessing, the data were embedded in phase space to extract the dynamic and geometric properties of the underlying brain system. Finally, the ability of each feature to predict and detect absence seizures was assessed with two criteria, predictive time and detection accuracy, and the results were compared with previous studies. Results: The results indicate that the brain's dynamics change during the transition from the seizure-free state to the pre-seizure state and then to seizure. The proposed approach yielded 97% detection accuracy, indicating that, given the nonlinear and complex nature of the brain and its signals, methods consistent with this nature are important for understanding the dynamic transitions between epileptic states. Conclusion: The dynamics change with the state of absence seizures, and the results of this research can be useful in real-time applications such as predicting epileptic seizures.
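    The "predictive time" criterion can be made concrete with a small sketch. The function below is illustrative only: it assumes per-window feature values and an annotated seizure-onset index, which in the study would come from labeled WAG/Rij recordings, and the toy numbers are invented.

```python
import numpy as np

def predictive_time(feature, onset_idx, threshold, win_s=1.0):
    """Seconds between the first threshold crossing of a per-window
    feature and the annotated seizure onset; None if no warning."""
    crossings = np.flatnonzero(feature > threshold)
    pre = crossings[crossings < onset_idx]
    if pre.size == 0:
        return None  # feature never crossed before onset
    return (onset_idx - pre[0]) * win_s

# Toy example: the feature rises two windows before an onset at window 8.
feature = np.array([1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 2.4, 2.9, 3.5, 3.6])
print(predictive_time(feature, onset_idx=8, threshold=2.0))  # -> 2.0 s
```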

    Real-time epileptic seizure detection on intra-cranial rat data using reservoir computing

    In this paper it is shown that reservoir computing can be successfully applied to perform real-time detection of epileptic seizures in electroencephalograms (EEGs). Absence and tonic-clonic seizures are detected in intracranial EEG recorded from rats. This resulted in an area under the receiver operating characteristic (ROC) curve of about 0.99 on the data used. For absences, an average detection delay of 0.3 s was noted; for tonic-clonic seizures it was 1.5 s. Since 15 h of data could be processed in 14.5 minutes on an average computer, all conditions are met for a fast and reliable real-time detection system.
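    A minimal echo state network conveys the flavor of the reservoir computing approach, though the paper's actual reservoir, input encoding, and readout training surely differ. The reservoir size, spectral radius, and ridge penalty below are illustrative assumptions, and random noise stands in for intracranial EEG.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 200
w_in = rng.uniform(-0.5, 0.5, n_res)             # fixed random input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))         # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with a 1-D signal u and collect its states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(w_in * u_t + W @ x)
        states[t] = x
    return states

# Toy data: noise stands in for EEG, and a simple amplitude rule
# stands in for expert seizure annotations.
u = rng.normal(size=2000)
y = (np.abs(u) > 1.5).astype(float)

# Train the linear readout by ridge regression on the reservoir states.
X = run_reservoir(u)
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

    Only the readout is trained; the recurrent weights stay fixed, which is what makes the approach cheap enough for the real-time throughput the paper reports.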

    Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience

    At first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space, and are capable of learning, implying that some information-processing principles should be common to both. In this thesis we study those principles by incorporating notions from systems theory, statistical physics, and graph theory into artificial neural networks and theoretical neuroscience models.

    The starting point for this thesis is reservoir computing (RC), a learning paradigm used both in machine learning (Jaeger, 2004) and in theoretical neuroscience (Maass et al., 2002). A neural network in RC consists of two parts: a reservoir, a directed and weighted network of neurons that projects the input time series onto a high-dimensional space, and a readout that is trained to read the states of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks, so their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this by identifying two features that are useful for machine learning: the memory of the reservoir and its power spectrum. First, we show that correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies present in the desired output signal, the performance of the readout increases.

    Knowing which features of the reservoir dynamics we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is to add cycles, which act as feedback loops, but this also induces correlations and hence modifies the memory. To disentangle the design of frequencies from that of memory, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the shape of the resulting eigenvalue distribution is quite beautiful (Aceituno et al., 2019) and can be characterized using tools from random matrix theory. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks and showed that it improves upon state-of-the-art RC in three different machine learning tasks.

    Although this idea works in the machine learning version of RC, one fundamental problem arises when we translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network. The next questions are therefore whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known spike-timing-dependent plasticity (STDP) combined with homeostatic mechanisms. We show this by deriving two self-consistent equations: one in which the activity of every neuron can be calculated from its synaptic weights and external inputs, and a second in which the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs, we obtained two families of solutions to those equations in which a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasi-periodic inputs can induce resonances that agree with the aforementioned RC theory.

    Those results, although rigorous, are expressed in the language of statistical physics and cannot easily be tested or verified on real, scarce data. To make them more accessible to the neuroscience community, we showed that latency reduction, a well-known effect of STDP (Song et al., 2000) that has been observed experimentally (Mehta et al., 2000), generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization, and prediction can emerge from the same phenomenon of latency reduction, thus closing the loop with our original machine learning problem.

    To summarize, this thesis exposes principles of learning in recurrent neural networks that are consistent with adaptation in the nervous system and also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, and then imposing the required dynamics on reservoir computing through control-theoretic notions such as feedback loops and spectral analysis. We then assessed the plausibility of such adaptation in biological networks, deriving solutions from biologically plausible self-organizing processes that align with the machine learning prescriptions. Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
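    The central design idea, that cycles added to a reservoir act as feedback loops whose eigenvalues set resonances, can be illustrated with a toy computation. The construction below is a simplification of the thesis's random-matrix analysis; the network size, background weight scale, and cycle gain are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, gain = 100, 5, 1.2
W = rng.normal(0, 0.05, (n, n))  # weak random background connectivity

# Wire a directed cycle of length L through the first L neurons
# (0 -> 1 -> ... -> L-1 -> 0) with total loop gain ~ `gain`.
for i in range(L):
    W[(i + 1) % L, i] += gain ** (1 / L)

# A pure length-L cycle has eigenvalues at the L-th roots of its loop
# gain: moduli gain**(1/L), angles 2*pi*k/L. With a weak background,
# the dominant eigenvalues of W stay close to those values.
eig = np.linalg.eigvals(W)
dominant = eig[np.argsort(-np.abs(eig))[:L]]
print(np.round(np.abs(dominant), 2))                     # ~ gain**(1/L)
print(np.round(np.angle(dominant) / (2 * np.pi / L), 2)) # ~ integers k
```

    The dominant eigenvalue moduli come out near gain**(1/L), at angles spaced by 2*pi/L, consistent with the spectral picture described above; tuning L and the loop gain is what lets a resonance be placed at a target frequency.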

    Detection of epileptic seizures: the reservoir computing approach
