
    Fast Robust Subject-Independent Magnetoencephalographic Source Localization Using an Artificial Neural Network

    We describe a system that localizes a single dipole to reasonable accuracy from noisy magnetoencephalographic (MEG) measurements in real time. At its core is a multilayer perceptron (MLP) trained to map sensor signals and head position to dipole location. Including head position overcomes the previous need to retrain the MLP for each subject and session. The training dataset was generated by mapping randomly chosen dipoles and head positions through an analytic model and adding noise from real MEG recordings. After training, a localization took 0.7 ms with an average error of 0.90 cm. A few iterations of a Levenberg-Marquardt routine using the MLP output as its initial guess took 15 ms and improved accuracy to 0.53 cm, which approaches the natural limit on accuracy imposed by noise. We applied these methods to localize single-dipole sources from MEG components isolated by blind source separation and compared the estimated locations to those generated by standard manually assisted commercial software.
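The two-stage pipeline described above (a fast learned localizer whose output seeds a few Levenberg-Marquardt iterations) can be sketched as follows. This is a minimal illustration: the forward model, sensor layout, and the perturbed "MLP guess" are toy stand-ins for the paper's analytic dipole model and trained network.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy forward model: signal at each sensor from a point source at position p
# (a stand-in for the analytic MEG dipole model used in the paper).
sensors = rng.uniform(-1.0, 1.0, size=(32, 3))

def forward(p):
    r = np.linalg.norm(sensors - p, axis=1)   # sensor-to-source distances
    return 1.0 / (r**2 + 0.1)                 # smooth distance-dependent field

true_pos = np.array([0.2, -0.1, 0.3])
measured = forward(true_pos) + 0.01 * rng.standard_normal(32)

# Stage 1: a trained network would emit a fast coarse estimate here;
# we use a perturbed guess as a stand-in for the MLP output.
mlp_guess = true_pos + np.array([0.15, -0.10, 0.12])

# Stage 2: a few Levenberg-Marquardt iterations refine that guess.
fit = least_squares(lambda p: forward(p) - measured, mlp_guess, method="lm")
print(np.linalg.norm(mlp_guess - true_pos), np.linalg.norm(fit.x - true_pos))
```

The refinement step is cheap because the network already lands close to the optimum, so the least-squares solver needs only a handful of iterations.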

    Progress in blind separation of magnetoencephalographic data

    The match between the physics of MEG and the assumptions of the best-developed blind source separation (BSS) algorithms (unknown instantaneous linear mixing process, many sensors compared to the expected number of recoverable sources, large data limit) has tempted researchers to apply these algorithms to MEG data. We review some of these efforts, with particular emphasis on our own work.
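A minimal illustration of the BSS assumptions named above (unknown instantaneous linear mixing, more sensors than sources): FastICA recovers two non-Gaussian sources from an 8-channel mixture. The synthetic sources and mixing matrix are illustrative stand-ins, not MEG data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)

# Two independent, non-Gaussian sources (the BSS setting: unknown
# instantaneous linear mixing, more sensors than recoverable sources).
S = np.c_[np.sign(np.sin(2 * np.pi * 7 * t)), rng.laplace(size=2000)]
A = rng.uniform(0.5, 1.5, size=(8, 2))   # unknown mixing into 8 "sensors"
X = S @ A.T                              # instantaneous linear mixture

S_hat = FastICA(n_components=2, max_iter=500, random_state=0).fit_transform(X)

# Each recovered component should match one true source up to
# permutation, sign, and scale.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(C.max(axis=1))
```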

    Allocation of Computational Resources in the Nervous System.

    The nervous system integrates past information with predictions about the future in order to produce rewarding actions for the organism. This dissertation focuses on the resources underlying these computations and their task-dependent allocation. We present evidence that principles from optimal coding and optimal estimation account for overt and covert orienting phenomena, as observed in both behavioral experiments and neuronal recordings. First, we review behavioral measurements related to selective attention and discuss models that account for these data. We show that reallocation of resources emerges as a natural property of systems that encode their inputs efficiently under non-uniform constraints. We continue by discussing the attentional modulation of neuronal activity, and show that: (1) modulation of coding strategies does not require special mechanisms: it is possible to obtain dramatic modulation even when signals informing the system about fidelity requirements enter the system in a fashion indistinguishable from sensory signals; (2) optimal coding under non-uniform fidelity requirements is sufficient to account for the firing-rate modulation observed during selective attention experiments; (3) the response of a single neuron cannot be well characterized by measurements of attentional modulation of only a single sensory stimulus; and (4) the magnitude of the activity modulation depends on the capacity of the neural circuit. A later chapter discusses the neural mechanisms for resource allocation and the relation between attentional mechanisms and receptive field formation. The remainder of the dissertation focuses on overt orienting phenomena and active perception. We present a theoretical analysis of the allocation of resources during state estimation of multiple targets with different uncertainties, together with eye-tracking experiments that confirm our predictions. We finish by discussing the implications of these results for our current understanding of orienting phenomena and the neural code.
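The allocation principle behind state estimation of multiple targets with different uncertainties can be illustrated with a toy model; the scalar Kalman updates and parameter values below are hypothetical stand-ins for the dissertation's analysis. With one observation per step, always sampling the currently most uncertain target makes fixation counts track each target's drift rate.

```python
import numpy as np

# Three drifting targets with different process noise; one "gaze sample"
# per step. Uncertainty grows for unobserved targets, so observing the
# target with the largest predicted variance keeps total uncertainty low.
q = np.array([0.5, 0.1, 0.02])   # per-target process noise (drift rates)
r = 0.05                          # measurement noise when fixated
var = np.ones(3)                  # posterior variance per target
looks = np.zeros(3, dtype=int)

for _ in range(300):
    var += q                              # predict: uncertainty grows
    i = np.argmax(var)                    # allocate the single observation
    looks[i] += 1
    var[i] = var[i] * r / (var[i] + r)    # scalar Kalman variance update
print(looks)
```

Faster-drifting (more uncertain) targets attract proportionally more fixations, qualitatively matching the eye-tracking predictions described in the abstract.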

    Magnetoencephalography-based approaches to epilepsy classification

    Epilepsy is a chronic central nervous system disorder characterized by recurrent seizures. Not only does epilepsy severely affect the daily life of the patient, but the risk of premature death in patients with epilepsy is three times higher than in the general population. Magnetoencephalography (MEG) is a non-invasive electrophysiological technique with high temporal and spatial resolution that provides a valid basis for epilepsy diagnosis, and it is used in clinical practice to locate epileptic foci. It has been shown that MEG helps to identify MRI-negative epilepsy, contributes to clinical decision-making in recurrent seizures after previous epilepsy surgery, that interictal MEG can provide localization information beyond scalp EEG, and that complete excision of the area defined by MEG has prognostic significance for postoperative seizure control. However, due to the complexity of the MEG signal, it is often difficult to identify subtle but critical changes through visual inspection, opening up an important area of research for biomedical engineers to investigate and implement intelligent algorithms for epilepsy recognition. At the same time, manual marking requires significant time and labor, necessitating the development of computer-aided diagnosis (CAD) systems that use classifiers to automatically identify abnormal activity. In this review, we discuss in detail the results of applying various feature extraction methods to MEG signals with different classifiers for epilepsy detection, subtype determination, and laterality classification. Finally, we briefly consider the prospects of using MEG for epilepsy-assisted localization (spike detection, high-frequency oscillation detection), given the unique advantages of MEG for functional-area localization in epilepsy, and discuss the limitations of current research and suggestions for future work. Overall, we hope that this review helps the reader quickly gain a general understanding of MEG-based epilepsy classification and provides ideas and directions for subsequent research.
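A minimal sketch of the feature-extraction-plus-classifier pipeline such CAD systems follow. The synthetic epochs, band edges, and logistic-regression classifier are illustrative assumptions, not a method from any reviewed study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs, n = 200, 400                       # sampling rate (Hz), samples per epoch

def log_band_power(x, lo, hi):
    # Log power in the [lo, hi) Hz band, computed from the FFT of one epoch.
    f = np.fft.rfftfreq(n, 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return np.log(p[(f >= lo) & (f < hi)].sum())

def epoch(abnormal):
    x = rng.standard_normal(n)         # background activity
    if abnormal:                       # add a high-frequency oscillation
        x += 2 * np.sin(2 * np.pi * 60 * np.arange(n) / fs)
    return [log_band_power(x, 1, 30), log_band_power(x, 30, 90)]

# Feature matrix: two band-power features per epoch, alternating labels.
X = np.array([epoch(i % 2 == 0) for i in range(200)])
y = np.arange(200) % 2 == 0
score = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(score)
```

Real systems use many more channels, richer features (time-frequency, connectivity, entropy), and stronger classifiers, but the extract-features-then-classify structure is the same.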

    Epilepsy

    With the vision of including authors from different parts of the world and different educational backgrounds, and of offering open access to their published work, InTech proudly presents the latest edited book in epilepsy research, Epilepsy: Histological, electroencephalographic, and psychological aspects. Here are twelve interesting and inspiring chapters dealing with basic molecular and cellular mechanisms underlying epileptic seizures, electroencephalographic findings, and the neuropsychological, psychological, and psychiatric aspects of epileptic, as well as non-epileptic, seizures.

    Memories, attractors, space and vowels

    Higher cognitive capacities, such as navigating complex environments or learning new languages, rely on the ability to memorize, in the brain, continuous noisy variables. Memories are generally understood to be realized, e.g. in the cortex and in the hippocampus, as configurations of activity towards which specific populations of neurons are "attracted", i.e. towards which they dynamically converge, if properly cued. Distinct memories are thus considered as separate attractors of the dynamics, embedded within the same neuronal connectivity structure. But what if the underlying variables are continuous, such as a position in space or the resonant frequency of a phoneme? If such variables are continuous and the experience to be retained in memory has even a minimal temporal duration, highly correlated, yet imprecisely determined values of those variables will occur at successive time instants. And even if memories are idealized as point-like in time, distinct memories will still be highly correlated. How does the brain self-organize to deal with noisy correlated memories? In this thesis, we approach the question along three interconnected itineraries. In Part II we first ask the opposite: we derive how many uncorrelated memories a network of neurons would be able to precisely store, as discrete attractors, if the neurons were optimally connected. Then, we compare the results with those obtained when memories are allowed to be retrieved imprecisely and connections are based on self-organization. We find that a simple strategy is available in the brain to facilitate the storage of memories: making them more sparse, i.e. silencing those neurons which are not very active in the configuration of activity to be memorized.
We observe that the more complex the distribution of activity in a memory, the more this sparsification strategy increases the number of storable memories, compared with the maximal load in networks endowed with the theoretically optimal connection weights. In Part III we ask, starting from experimental observations of spatially selective cells in quasi-realistic environments, how the brain can store complex and irregular spatial information as a continuous attractor. We find indications that while continuous attractors, per se, are too brittle to deal with irregularities, there seem to be other mathematical objects, which we refer to as quasi-attractive continuous manifolds, which may have this function. Such objects, which emerge as soon as a tiny amount of quenched irregularity is introduced into would-be continuous attractors, seem to persist over a wide range of noise levels and then break up, in a phase transition, when the variability reaches a critical threshold, lying just above that seen in the experimental measurements. Moreover, we find that the operational range is squeezed from behind, as it were, by a third phase, in which the spatially selective units cannot dynamically converge towards a localized state. Part IV, which is more exploratory, is motivated by the frequency characteristics of vowels. We hypothesize that phonemes of different languages could also be stored as separate fixed points in the brain through a sort of two-dimensional cognitive map. In our preliminary results, we show that a continuous quasi-attractor model, trained with noisy recorded vowels, can effectively learn them through a self-organized procedure and retrieve them separately, as fixed points on a quasi-attractive manifold. Overall, this thesis attempts to contribute to the search for general principles underlying memory, understood as an emergent collective property of networks in the brain, based on self-organization, imperfections and irregularities.
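The notion of memories as discrete attractors embedded in one connectivity structure can be made concrete with a standard Hopfield network; this is a generic sketch, not the thesis's self-organizing model. Stored patterns become fixed points to which a noisy cue converges.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 10                         # neurons, stored memories

# Random binary memories stored as attractors via a Hebbian weight matrix.
xi = rng.choice([-1, 1], size=(P, N))
W = (xi.T @ xi) / N
np.fill_diagonal(W, 0)                 # no self-connections

# Cue: memory 0 with 20% of its bits flipped, then iterate the dynamics.
s = xi[0].copy()
flip = rng.choice(N, size=40, replace=False)
s[flip] *= -1
for _ in range(20):
    s = np.sign(W @ s)

overlap = (s @ xi[0]) / N              # near 1: converged back to the memory
print(overlap)
```

At this low load (10 memories for 200 neurons) the corrupted cue falls inside the basin of attraction and retrieval is essentially perfect; the thesis's question is what happens when memories are correlated, continuous, or sparse rather than random and discrete.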

    Brain signal processing and neurological therapy

    Doctor of Philosophy (Ph.D.) thesis.

    Understanding the computational processes of the human mind through interpreting machine learning models. A data-driven approach to computational neuroscience

    Building a model of a complex phenomenon is an ancient way of gaining knowledge and understanding of the reality around us. Models of planetary motion, gravity, and particle physics are examples of the success of this approach. In neuroscience, there are two ways of coming up with explanations of reality: a traditional hypothesis-driven approach, where a model is first formulated and then tested using the data, and a more recent data-driven approach that relies on machine learning to generate models automatically. The hypothesis-driven approach provides full understanding of how the model works, but is time-consuming, as each model has to be conceived and tested manually. The data-driven approach requires only the data and computational resources to sift through potential models, saving time, but leaving the resulting model itself a black box. Given the growing amount of neural data, we argue in favor of a more widespread adoption of the data-driven approach, reallocating part of the human effort from manual modeling to the interpretation of the resulting models. The thesis is based on three examples of how interpretation of machine-learned models leads to neuroscientific insights on three different levels of neural organization. Our first interpretable model is used to characterize the neural dynamics of localized neural activity during a visual perceptual categorization task. Next, we compare the activity of the human visual system with the activity of a convolutional neural network, revealing explanations about the functional organization of human visual cortex. Lastly, we use dimensionality reduction and visualization techniques to understand the relative organization of mental concepts within a subject's mental state space and apply it in the context of brain-computer interfaces. Recent results in neuroscience and AI show similarities between the mechanisms of both systems. This fact endorses the relevance of our approach: interpreting the mechanisms employed by machine learning models can shed light on the mechanisms employed by our brain.
    https://www.ester.ee/record=b536057
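A generic sketch of the third example's idea: mapping high-dimensional "mental state" patterns into a low-dimensional space where similar states land near each other. PCA and the simulated two-state data below are illustrative stand-ins for the thesis's topology-preserving method and real brain signals.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Two simulated "mental states": high-dimensional activity patterns drawn
# around two distinct prototypes.
proto = rng.standard_normal((2, 50))
X = np.vstack([proto[i % 2] + 0.3 * rng.standard_normal(50)
               for i in range(100)])
labels = np.arange(100) % 2

# Project to a 2-D map of the state space.
Z = PCA(n_components=2).fit_transform(X)

# Patterns of the same state should cluster; the two states should separate.
d_within = np.linalg.norm(Z[labels == 0] - Z[labels == 0].mean(0),
                          axis=1).mean()
d_between = np.linalg.norm(Z[labels == 0].mean(0) - Z[labels == 1].mean(0))
print(d_within, d_between)
```

In such an embedding, nearby points correspond to similar brain states, which is what makes the visualization useful both for interpretation and for brain-computer interface design.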