
    Machine learning based brain signal decoding for intelligent adaptive deep brain stimulation

    Sensing-enabled implantable devices and next-generation neurotechnology allow real-time adjustments of invasive neuromodulation. The identification of symptom- and disease-specific biomarkers in invasive brain signal recordings has inspired the idea of demand-dependent adaptive deep brain stimulation (aDBS). Expanding the clinical utility of aDBS with machine learning may hold the potential for the next breakthrough in the therapeutic success of clinical brain-computer interfaces. To this end, sophisticated machine learning algorithms optimized for decoding brain states from neural time series must be developed. To support this venture, this review summarizes the current state of machine learning studies for invasive neurophysiology. After a brief introduction to machine learning terminology, the transformation of brain recordings into meaningful features for decoding symptoms and behavior is described. Commonly used machine learning models are explained and analyzed from the perspective of their utility for aDBS. This is followed by a critical review of good practices for training and testing to ensure conceptual and practical generalizability for real-time adaptation in clinical settings. Finally, first studies combining machine learning with aDBS are highlighted. This review takes a glimpse into the promising future of intelligent adaptive DBS (iDBS) and concludes by identifying four key ingredients on the road to successful clinical adoption: i) multidisciplinary research teams, ii) publicly available datasets, iii) open-source algorithmic solutions and iv) strong worldwide research collaborations.
    Affiliations: Merk, Timon (Charité – Universitätsmedizin Berlin, Germany); Peterson, Victoria (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico CONICET - Santa Fe, Instituto de Matemática Aplicada del Litoral, Universidad Nacional del Litoral, Argentina; Harvard Medical School, United States); Köhler, Richard (Charité – Universitätsmedizin Berlin, Germany); Haufe, Stefan (Charité – Universitätsmedizin Berlin, Germany); Richardson, R. Mark (Harvard Medical School, United States); Neumann, Wolf Julian (Charité – Universitätsmedizin Berlin, Germany)
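The feature-extraction and decoding workflow summarized above can be made concrete with a small sketch. The following code is not from the paper; it illustrates one common recipe, computing band-power features from synthetic LFP segments and scoring a linear decoder with time-ordered cross-validation. The sampling rate, frequency bands, and data are illustrative assumptions.

```python
# Minimal sketch, not code from the paper: band-power features from synthetic
# LFP segments plus a linear decoder, evaluated with time-ordered folds.
# Sampling rate, bands, and the random data are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

fs = 250                      # assumed LFP sampling rate in Hz
rng = np.random.default_rng(0)

def band_power(segment, fs, band):
    """Mean spectral power of one segment within a frequency band."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# 200 one-second segments with a binary behavioral/symptom label (synthetic).
segments = rng.standard_normal((200, fs))
labels = rng.integers(0, 2, size=200)

bands = {"theta": (4, 8), "beta": (13, 35), "gamma": (60, 90)}
X = np.array([[band_power(s, fs, b) for b in bands.values()] for s in segments])

# Time-ordered cross-validation avoids training on data recorded after the
# test data, which matters for estimating real-time decoding performance.
scores = cross_val_score(LogisticRegression(), X, labels, cv=TimeSeriesSplit(5))
print("decoding accuracy per fold:", scores)
```

Using TimeSeriesSplit rather than shuffled folds mirrors the review's emphasis on training and testing practices that generalize to real-time, causally ordered data.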

    The Berlin Brain-Computer Interface: Progress Beyond Communication and Control

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world.
    Funding: EC/FP7/611570/EU (Symbiotic Mind Computer Interaction for Information Seeking, MindSee); EC/FP7/625991/EU (Hyperscanning 2.0 Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications, HYPERSCANNING 2.0); DFG 103586207, GRK 1589 (Verarbeitung sensorischer Informationen in neuronalen Systemen)

    Harmoni: A method for eliminating spurious interactions due to the harmonic components in neuronal data

    Cross-frequency synchronization (CFS) has been proposed as a mechanism for integrating spatially and spectrally distributed information in the brain. However, investigating CFS in Magneto- and Electroencephalography (MEG/EEG) is hampered by the presence of spurious neuronal interactions due to the non-sinusoidal waveshape of brain oscillations. Such waveshape gives rise to oscillatory harmonics that mimic genuine neuronal oscillations. Until recently, however, there has been no methodology for removing these harmonics from neuronal data. To address this long-standing challenge, we introduce a novel method (HARMOnic miNImization, or Harmoni) that removes signal components which can be harmonics of a non-sinusoidal signal. Harmoni's working principle is based on the presence of CFS between the harmonic components and the fundamental component of a non-sinusoidal signal. We extensively tested Harmoni in realistic EEG simulations. The simulated couplings between the source signals represented genuine and spurious CFS and within-frequency phase synchronization. Using diverse evaluation criteria, including ROC analyses, we showed that within- and cross-frequency spurious interactions are suppressed significantly, while genuine activities are not affected. Additionally, we applied Harmoni to real resting-state EEG data, revealing intricate remote connectivity patterns that are usually masked by spurious connections. Given the ubiquity of non-sinusoidal neuronal oscillations in electrophysiological recordings, Harmoni is expected to facilitate novel insights into genuine neuronal interactions in various research fields, and can also serve as a stepping stone towards the development of further signal processing methods aimed at refining within- and cross-frequency synchronization in electrophysiological recordings.
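The coupling that Harmoni exploits can be illustrated with a short simulation. This is not the Harmoni implementation; it only demonstrates that the harmonic of a non-sinusoidal oscillation is rigidly phase-locked to its fundamental, so a naive 1:2 phase-synchronization measure reports spurious cross-frequency coupling. All signal parameters below are assumptions chosen for illustration.

```python
# Minimal sketch, not the Harmoni implementation: it only demonstrates the
# phenomenon Harmoni targets, i.e. a non-sinusoidal oscillation whose harmonic
# is phase-locked to its fundamental. All parameters are assumptions.
import numpy as np
from scipy.signal import sawtooth, butter, filtfilt, hilbert

fs, f0, dur = 1000, 10.0, 20.0     # sampling rate (Hz), fundamental (Hz), duration (s)
rng = np.random.default_rng(0)
t = np.arange(0, dur, 1 / fs)
x = sawtooth(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

def band(signal, lo, hi):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

phase_fund = np.angle(hilbert(band(x, 8, 12)))    # fundamental component (~10 Hz)
phase_harm = np.angle(hilbert(band(x, 18, 22)))   # second harmonic (~20 Hz)

# 1:2 phase-locking value; a value near 1 is the spurious cross-frequency
# synchronization that waveshape harmonics produce in real recordings.
plv = np.abs(np.mean(np.exp(1j * (2 * phase_fund - phase_harm))))
print(f"1:2 phase-locking value: {plv:.2f}")
```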

    Simple acoustic features can explain phoneme-based predictions of cortical responses to speech

    When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information-theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
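As a hedged illustration of the kind of encoding model discussed above (not the study's actual pipeline), the sketch below fits a time-lagged linear model that predicts a neural response channel from an acoustic-edge feature. The data are synthetic, and the sampling rate, lag range, and ridge penalty are assumed values.

```python
# Minimal sketch, not the study's pipeline: a time-lagged linear encoding
# model predicting a synthetic neural channel from an acoustic-edge feature.
# Sampling rate, lag range, and ridge penalty are assumed values.
import numpy as np
from sklearn.linear_model import Ridge

fs, n = 100, 6000                  # feature/response rate (Hz), 60 s of samples
rng = np.random.default_rng(0)

# Stand-in acoustic envelope and its half-wave-rectified derivative ("edges").
envelope = np.convolve(np.abs(rng.standard_normal(n)), np.ones(10) / 10, mode="same")
edges = np.clip(np.diff(envelope, prepend=envelope[0]), 0, None)

def lagged(feature, max_lag):
    """Stack circularly time-lagged copies of a feature as design-matrix columns."""
    return np.column_stack([np.roll(feature, lag) for lag in range(max_lag + 1)])

X = lagged(edges, max_lag=30)                                  # lags up to 300 ms
response = X @ rng.standard_normal(X.shape[1]) + rng.standard_normal(n)  # fake MEG channel

split = n // 2
model = Ridge(alpha=1.0).fit(X[:split], response[:split])
r = np.corrcoef(model.predict(X[split:]), response[split:])[0, 1]
print(f"encoding-model prediction correlation: {r:.2f}")
```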

    MEEGIPS—A modular EEG investigation and processing system for visual and automated detection of high frequency oscillations

    High frequency oscillations (HFOs) are electroencephalographic correlates of brain activity detectable in a frequency range above 80 Hz. They co-occur with physiological processes such as saccades, movement execution, and memory formation, but are also related to pathological processes in patients with epilepsy. Localization of the seizure onset zone, and, more specifically, of the to-be-resected area in patients with refractory epilepsy seems to be supported by the detection of HFOs. The visual identification of HFOs is very time consuming, taking approximately 8 h for 10 min of recording across 20 channels. Therefore, automated detection of HFOs is highly warranted. So far, no software for visual marking or automated detection of HFOs meets the needs of everyday clinical practice and research. In the context of the currently available tools, and for the purpose of related local HFO study activities, we aimed at converging the advantages of clinical and experimental systems by designing and developing a comprehensive and extensible software framework for HFO analysis that, on the one hand, focuses on the requirements of clinical application and, on the other hand, facilitates the integration of experimental code and algorithms. The development project included the definition of use cases, specification of requirements, software design, implementation, and integration. The work comprised the engineering of component-specific requirements, component design, as well as component- and integration-tests. A functional and tested software package is the deliverable of this activity. The project MEEGIPS, a Modular EEG Investigation and Processing System for visual and automated detection of HFOs, introduces highly user-friendly software that includes five of the most prominent automated detection algorithms. Future evaluation of these, as well as implementation of further algorithms, is facilitated by the modular software architecture.
    Funding: This work was supported by the Austrian Science Fund (FWF): KLI 657-B31 and by the PMU-FFF: A-18/01/029-HÖL.
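To make the idea of automated HFO detection concrete, here is a deliberately naive RMS-threshold detector. It is not one of the five algorithms included in MEEGIPS; the filter band, window length, and threshold are assumptions used only for illustration.

```python
# Minimal sketch, not one of MEEGIPS' detectors: a naive RMS-threshold HFO
# detector shown only to illustrate the basic idea. Filter band, window
# length, and threshold are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_hfos(signal, fs, band=(80, 250), win_ms=10, n_sd=3):
    """Return (start, stop) sample indices where band-limited RMS crosses a threshold."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    win = max(1, int(fs * win_ms / 1000))
    rms = np.sqrt(np.convolve(filtered ** 2, np.ones(win) / win, mode="same"))
    above = rms > rms.mean() + n_sd * rms.std()
    # Collapse consecutive supra-threshold samples into candidate events.
    padded = np.concatenate(([False], above, [False]))
    edges = np.diff(padded.astype(int))
    return list(zip(np.where(edges == 1)[0], np.where(edges == -1)[0]))

fs = 2000
rng = np.random.default_rng(0)
sig = rng.standard_normal(10 * fs)                                    # 10 s of synthetic iEEG
sig[5000:5200] += 5 * np.sin(2 * np.pi * 120 * np.arange(200) / fs)   # injected 120 Hz burst
print(detect_hfos(sig, fs))
```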

    A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech


    Novel computational methods for in vitro and in situ cryo-electron microscopy

    Over the past decade, advances in microscope hardware and image data processing algorithms have made cryo-electron microscopy (cryo-EM) a dominant technique for protein structure determination. Near-atomic resolution can now be obtained for many challenging in vitro samples using single-particle analysis (SPA), while sub-tomogram averaging (STA) can obtain sub-nanometer resolution for large protein complexes in a crowded cellular environment. Reaching high resolution requires large amounts of image data. Modern transmission electron microscopes (TEMs) automate the acquisition process and can acquire thousands of micrographs or hundreds of tomographic tilt series over several days without intervention.
    In a first step, the data must be pre-processed: micrographs acquired as movies are corrected for stage and beam-induced motion. For tilt series, additional alignment of all micrographs in 3D is performed using gold- or patch-based fiducials. Parameters of the contrast-transfer function (CTF) are estimated to enable its reversal during SPA refinement. Finally, individual protein particles must be located and extracted from the aligned micrographs. Current pre-processing algorithms, especially those for particle picking, are not robust enough to enable fully unsupervised operation. Thus, pre-processing is started after data collection and takes several days due to the amount of supervision required. Pre-processing the data in parallel to acquisition with more robust algorithms would save time and allow bad samples and microscope settings to be discovered early on.
    Warp is a new software for cryo-EM data pre-processing. It implements new algorithms for motion correction, CTF estimation, and tomogram reconstruction, as well as deep learning-based approaches to particle picking and image denoising. The algorithms are more accurate and robust, enabling unsupervised operation. Warp integrates all pre-processing steps into a pipeline that is executed on-the-fly during data collection. Integrated with SPA tools, the pipeline can produce 2D and 3D classes less than an hour into data collection for favorable samples. Here I describe the implementation of the new algorithms and evaluate them on various movie and tilt series data sets. I show that unsupervised pre-processing of a tilted influenza hemagglutinin trimer sample with Warp and refinement in cryoSPARC can improve previously published resolution from 3.9 Å to 3.2 Å. Warp's algorithms operate in a reference-free manner to improve the image resolution at the pre-processing stage, when no high-resolution maps are available for the particles yet.
    Once 3D maps have been refined, they can be used to go back to the raw data and perform reference-based refinement of sample motion and CTF in movies and tilt series. M is a new tool I developed to solve this task in a multi-particle framework. Instead of following the SPA assumption that every particle is single and independent, M models all particles in a field of view as parts of a large, physically connected multi-particle system. This allows M to optimize hyper-parameters of the system, such as sample motion and deformation, or higher-order aberrations in the CTF. Because M models these effects accurately and optimizes all hyper-parameters simultaneously with particle alignments, it can surpass previous reference-based frame and tilt series alignment tools. Here I describe the implementation of M, evaluate it on several data sets, and demonstrate that the new algorithms achieve equally high resolution with movie and tilt series data of the same sample. Most strikingly, the combination of Warp, RELION and M can resolve 70S ribosomes bound to an antibiotic at 3.5 Å inside vitrified Mycoplasma pneumoniae cells, marking a major advance in resolution for in situ imaging.
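The on-the-fly pre-processing idea can be sketched generically. The code below is not Warp's actual API: a loop watches the acquisition directory, and each new movie is motion-corrected, CTF-fitted, and picked as it arrives. The helper functions motion_correct, estimate_ctf, and pick_particles, and the data path, are hypothetical placeholders, not real library calls.

```python
# Minimal sketch, not Warp's actual API: the general shape of an on-the-fly
# pre-processing loop that handles each newly acquired movie as it appears.
# motion_correct, estimate_ctf, pick_particles, and the data path are
# hypothetical placeholders, not real library calls.
import time
from pathlib import Path

def motion_correct(movie_path: Path) -> Path:
    """Placeholder: align movie frames, return the averaged micrograph path."""
    return movie_path.with_suffix(".mrc")

def estimate_ctf(micrograph: Path) -> dict:
    """Placeholder: fit defocus/astigmatism and return CTF parameters."""
    return {"defocus_um": 1.5}

def pick_particles(micrograph: Path) -> list:
    """Placeholder: locate particle coordinates, e.g. with a pretrained CNN."""
    return []

def watch_and_preprocess(raw_dir: Path, poll_seconds: float = 10.0) -> None:
    """Process every new movie exactly once, in parallel with data collection."""
    done = set()
    while True:
        for movie in sorted(raw_dir.glob("*.tiff")):
            if movie in done:
                continue
            micrograph = motion_correct(movie)
            ctf = estimate_ctf(micrograph)
            particles = pick_particles(micrograph)
            print(f"{movie.name}: defocus {ctf['defocus_um']} um, {len(particles)} particles")
            done.add(movie)
        time.sleep(poll_seconds)

# Example invocation (path is illustrative):
# watch_and_preprocess(Path("/data/session_001"))
```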

    All-optical spiking neurons integrated on a photonic chip
