258 research outputs found

    ResOT: Resource-Efficient Oblique Trees for Neural Signal Classification

    Classifiers that can be implemented on-chip with minimal computational and memory resources are essential for edge computing in emerging applications such as medical and IoT devices. This paper introduces a machine learning model based on oblique decision trees to enable resource-efficient classification on a neural implant. By integrating model compression with probabilistic routing and implementing cost-aware learning, our proposed model can significantly reduce memory and hardware cost compared to state-of-the-art models, while maintaining classification accuracy. We trained the resource-efficient oblique tree with power-efficient regularization (ResOT-PE) on three neural classification tasks to evaluate its performance, memory, and hardware requirements. On a seizure detection task, we reduced the model size by 3.4X and the feature extraction cost by 14.6X compared to an ensemble of boosted trees, using intracranial EEG from 10 epilepsy patients. In a second experiment, we tested the ResOT-PE model on tremor detection for Parkinson's disease, using local field potentials from 12 patients implanted with a deep-brain stimulation (DBS) device. We achieved classification performance comparable to the state-of-the-art boosted tree ensemble, while reducing the model size and feature extraction cost by 10.6X and 6.8X, respectively. We also tested on a 6-class finger movement detection task using ECoG recordings from 9 subjects, reducing the model size by 17.6X and the feature computation cost by 5.1X. The proposed model can enable a low-power, memory-efficient implementation of classifiers for real-time neurological disease detection and motor decoding.
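The oblique splits with probabilistic routing that this abstract describes can be illustrated with a minimal sketch: an oblique node tests a linear combination of features rather than a single feature, and soft routing replaces the hard threshold with a sigmoid. This is an illustrative sketch, not the paper's implementation; the weight vector `w`, bias `b`, and `temperature` are hypothetical parameters.

```python
import math

def oblique_route(x, w, b, temperature=1.0):
    """Probability of routing sample x to the left child of an oblique
    tree node. The split is a linear combination w.x + b of all features
    (an axis-aligned tree would test a single feature instead), and the
    sigmoid turns the hard threshold into a soft, probabilistic route."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z / temperature))
```

A sample lying exactly on the split hyperplane is routed left with probability 0.5; samples far from it are routed almost deterministically.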

    Machine learning based brain signal decoding for intelligent adaptive deep brain stimulation

    Sensing-enabled implantable devices and next-generation neurotechnology allow real-time adjustments of invasive neuromodulation. The identification of symptom- and disease-specific biomarkers in invasive brain signal recordings has inspired the idea of demand-dependent adaptive deep brain stimulation (aDBS). Expanding the clinical utility of aDBS with machine learning may hold the potential for the next breakthrough in the therapeutic success of clinical brain-computer interfaces. To this end, sophisticated machine learning algorithms optimized for decoding brain states from neural time series must be developed. To support this venture, this review summarizes the current state of machine learning studies for invasive neurophysiology. After a brief introduction to machine learning terminology, the transformation of brain recordings into meaningful features for decoding symptoms and behavior is described. Commonly used machine learning models are explained and analyzed from the perspective of their utility for aDBS. This is followed by a critical review of good practices for training and testing to ensure conceptual and practical generalizability for real-time adaptation in clinical settings. Finally, first studies combining machine learning with aDBS are highlighted. This review takes a glimpse into the promising future of intelligent adaptive DBS (iDBS) and concludes by identifying four key ingredients on the road to successful clinical adoption: i) multidisciplinary research teams, ii) publicly available datasets, iii) open-source algorithmic solutions, and iv) strong worldwide research collaborations.

    Proof of Concept of an Online EMG-Based Decoding of Hand Postures and Individual Digit Forces for Prosthetic Hand Control

    Introduction: Options currently available to individuals with upper limb loss range from prosthetic hands that can perform many movements but require greater cognitive effort to control, to simpler terminal devices with limited functional abilities. We attempted to address this issue by designing a myoelectric control system to modulate prosthetic hand posture and digit force distribution. Methods: We recorded surface electromyographic (EMG) signals from five forearm muscles in eight able-bodied subjects while they modulated hand posture and the flexion force distribution of individual fingers. We used a support vector machine (SVM) and random forest regression (RFR) to map EMG signal features to hand posture and individual digit forces, respectively. After training, subjects performed grasping tasks and hand gestures while a computer program computed and displayed online feedback of all digit forces, showing which digits were flexed and the magnitude of the contact forces. We also used a commercially available prosthetic hand, the i-Limb (Touch Bionics), to provide a practical demonstration of the proposed approach's ability to control hand posture and finger forces. Results: Subjects could control hand pose and force distribution across the fingers during online testing. Decoding success rates ranged from 60% for index-finger pointing to 83–99% for the 2-digit grasp and resting state, respectively. Subjects could also modulate finger force distribution. Discussion: This work provides a proof of concept for the application of SVM and RFR to the online control of hand posture and finger force distribution, respectively. Our approach has potential applications for enabling in-hand manipulation with a prosthetic hand. View the article as published at http://journal.frontiersin.org/article/10.3389/fneur.2017.00007/ful
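The study's two-model pipeline (an SVM classifying posture, an RFR regressing digit force) can be sketched with scikit-learn on synthetic stand-in data. The features, labels, and hyperparameters below are placeholders for illustration only, not the study's actual EMG features or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: 5 "EMG channel" features per trial.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
posture = (X[:, 0] + X[:, 1] > 0).astype(int)  # two hypothetical postures
force = np.clip(X[:, 2], 0, None)              # one digit's flexion force

# SVM maps features -> discrete posture; RFR maps features -> continuous force.
clf = SVC(kernel="rbf").fit(X[:150], posture[:150])
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:150], force[:150])

acc = clf.score(X[150:], posture[150:])        # held-out posture accuracy
force_pred = reg.predict(X[150:])              # held-out force estimates
```

In an online setting, the same two fitted models would be applied to each incoming window of EMG features to drive posture selection and per-digit force simultaneously.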

    Across-subjects classification of stimulus modality from human MEG high frequency activity

    Single-trial analyses have the potential to uncover meaningful brain dynamics that are obscured when averaging across trials. However, a low signal-to-noise ratio (SNR) can impede the use of single-trial analyses and decoding methods. In this study, we investigate the applicability of a single-trial approach to decode stimulus modality from magnetoencephalographic (MEG) high frequency activity. In order to classify the auditory versus visual presentation of words, we combine beamformer source reconstruction with the random forest classification method. To enable group-level inference, the classification is embedded in an across-subjects framework. We show that single-trial gamma SNR allows for good classification performance (accuracy across subjects: 66.44%). This implies that the characteristics of high frequency activity are highly consistent across trials and subjects. The random forest classifier assigned informational value to activity in both auditory and visual cortex with high spatial specificity. Across time, gamma power was most informative during stimulus presentation. Among all frequency bands, the 75 Hz to 95 Hz band was the most informative in both visual and auditory areas. Especially in visual areas, a broad range of gamma frequencies (55 Hz to 125 Hz) contributed to the successful classification. Thus, we demonstrate the feasibility of single-trial approaches for decoding stimulus modality across subjects from high frequency activity, and describe the discriminative gamma activity in time, frequency, and space.
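An across-subjects decoding framework of the kind described here is commonly realized as leave-one-subject-out cross-validation, where every fold holds out all trials of one subject. A minimal sketch of that splitting scheme (the abstract does not spell out its exact procedure, so this is an assumption about the typical implementation):

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) splits in which the test fold contains
    every trial of exactly one held-out subject. Training never sees the
    test subject, so accuracy reflects generalization across subjects."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield train, test
```

Averaging fold accuracies then gives a group-level performance estimate such as the 66.44% reported above.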

    Advanced Sensing and Image Processing Techniques for Healthcare Applications

    This Special Issue aims to attract the latest research and findings in the design, development, and experimentation of healthcare-related technologies. This includes, but is not limited to, the use of novel sensing, imaging, data processing, machine learning, and artificially intelligent devices and algorithms to assist and monitor the elderly, patients, and the disabled population.

    Towards electrodeless EMG linear envelope signal recording for myo-activated prostheses control

    After amputation, the residual muscles of the limb may still function normally, enabling the electromyogram (EMG) signals recorded from them to be used to drive a replacement limb. These replacement limbs are called myoelectric prostheses. Prostheses that use EMG have long been the first choice of both clinicians and engineers. Unfortunately, due to the many drawbacks of EMG (e.g. skin preparation, electromagnetic interference, high sample rate), researchers have sought suitable alternatives. This work proposes a dry-contact, low-cost sensor based on a force-sensitive resistor (FSR) as a valid alternative, one which detects the mechanical rather than the electrical events of the muscle. The FSR is placed on the skin through a hard, circular base to sense the muscle contraction and acquire the signal. To reduce the output (resistance) drift caused by FSR edge effects (creep) and to maintain the FSR's sensitivity over a wide input force range, signal conditioning (a voltage output proportional to force) is implemented. The acquired FSR signal can be used directly to replace the EMG linear envelope, an important control signal in prosthetics applications. To find the best FSR position(s) to replace a single EMG lead, EMG and FSR output were recorded simultaneously. Three FSRs were placed directly over the EMG electrodes, in the middle of the targeted muscle, and the individual sensors (FSR1, FSR2, and FSR3) as well as combinations (e.g. FSR1+FSR2, FSR2-FSR3) were evaluated. The experiment was performed on a small sample of five volunteer subjects. The results show a high correlation (up to 0.94) between the FSR output and the EMG linear envelope. Consequently, the best FSR sensor position demonstrated the ability of the electrodeless FSR linear envelope (FSR-LE) to proportionally control a prosthesis (a 3-D claw). Furthermore, the FSR can be used to develop a universal programmable muscle signal sensor suitable for controlling myo-activated prostheses.
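The signal-conditioning step (a voltage output proportional to force) is often built around a simple divider or amplifier stage on the FSR. The relationship below is a sketch of the plain voltage-divider case; the component values and supply voltage are illustrative, not taken from the paper.

```python
def fsr_voltage(r_fsr_ohms, r_fixed_ohms=10_000, v_supply=3.3):
    """Output of a voltage divider with the FSR on the high side:
    V_out = V_supply * R_fixed / (R_FSR + R_fixed).
    An FSR's resistance falls as applied force rises, so V_out rises
    toward V_supply with increasing muscle contraction force."""
    return v_supply * r_fixed_ohms / (r_fsr_ohms + r_fixed_ohms)
```

An op-amp stage is usually preferred in practice because it linearizes the force-to-voltage curve and buffers the high-impedance divider node, but the monotonic force-to-voltage mapping is the same idea.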

    Wearable in-ear pulse oximetry: theory and applications

    Wearable health technology, most commonly in the form of the smart watch, is employed by millions of users worldwide. These devices generally exploit photoplethysmography (PPG), the non-invasive use of light to measure blood volume, in order to track physiological metrics such as pulse and respiration. Moreover, PPG is commonly used in hospitals in the form of pulse oximetry, which measures light absorbance by the blood at different wavelengths of light to estimate blood oxygen levels (SpO2). This thesis aims to demonstrate that despite its widespread usage over many decades, this sensor still possesses a wealth of untapped value. Through a combination of advanced signal processing and harnessing the ear as a location for wearable sensing, this thesis introduces several novel high impact applications of in-ear pulse oximetry and photoplethysmography. The aims of this thesis are accomplished through a three pronged approach: rapid detection of hypoxia, tracking of cognitive workload and fatigue, and detection of respiratory disease. By means of the simultaneous recording of in-ear and finger pulse oximetry at rest and during breath hold tests, it was found that in-ear SpO2 responds on average 12.4 seconds faster than the finger SpO2. This is likely due in part to the ear being in close proximity to the brain, making it a priority for oxygenation and thus making wearable in-ear SpO2 a good proxy for core blood oxygen. Next, the low latency of in-ear SpO2 was further exploited in the novel application of classifying cognitive workload. It was found that in-ear pulse oximetry was able to robustly detect tiny decreases in blood oxygen during increased cognitive workload, likely caused by increased brain metabolism. This thesis demonstrates that in-ear SpO2 can be used to accurately distinguish between different levels of an N-back memory task, representing different levels of mental effort. 
    This concept was further validated through its application to gaming and then extended to the detection of driver-related fatigue. It was found that features derived from SpO2 and PPG were predictive of absolute steering wheel angle, which acts as a proxy for fatigue. The strength of in-ear PPG for the monitoring of respiration was investigated with respect to the finger, with the conclusion that in-ear PPG exhibits far stronger respiration-induced intensity variations and pulse amplitude variations than the finger. All three respiratory modes were harnessed through multivariate empirical mode decomposition (MEMD) to produce spirometry-like respiratory waveforms from PPG. It was discovered that these PPG-derived respiratory waveforms can be used to detect obstruction to breathing, both through a novel apparatus for the simulation of breathing disorders and through the classification of chronic obstructive pulmonary disease (COPD) in the real world. This thesis establishes in-ear pulse oximetry as a wearable technology with the potential for immense societal impact, with applications from the classification of cognitive workload and the prediction of driver fatigue through to the detection of chronic obstructive pulmonary disease. The experiments and analysis in this thesis conclusively demonstrate that widely used pulse oximetry and photoplethysmography possess a wealth of untapped value, in essence teaching the old PPG sensor new tricks.
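Extracting respiratory waveforms via MEMD, as the thesis does, requires a dedicated implementation. As a much simpler stand-in that conveys the core idea of isolating the respiration-induced intensity variation from the faster cardiac pulse, a moving-average baseline can be used; the data and parameters below are synthetic and illustrative.

```python
import numpy as np

def respiratory_intensity(ppg, fs, window_s=2.0):
    """Crude stand-in for MEMD-based extraction: estimate the
    respiration-induced intensity variation of a PPG signal as its
    slow-moving baseline (moving average over ~window_s seconds).
    MEMD would instead sum the intrinsic mode functions that fall
    in the respiratory frequency band."""
    win = int(window_s * fs)
    kernel = np.ones(win) / win
    return np.convolve(ppg, kernel, mode="same")

# Synthetic PPG: 0.25 Hz respiratory baseline plus a 1.5 Hz pulse.
fs = 100
t = np.arange(0, 30, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)
ppg = resp + 0.5 * np.sin(2 * np.pi * 1.5 * t)
baseline = respiratory_intensity(ppg, fs)
```

The 2 s window spans whole pulse cycles, so the cardiac component averages out while the slower respiratory oscillation survives, yielding a respiration-like waveform.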

    Neuronal Oscillations in Various Frequency Bands Differ between Pain and Touch

    Although humans are generally capable of distinguishing single events of pain or touch, recent research suggests that both modalities activate a network of similar brain regions. By contrast, less attention has been paid to the processes that uniquely contribute to each modality. The present study used electroencephalography (EEG) to investigate the neuronal oscillations that enable a subject to process pain and touch and to evaluate the intensity of both modalities. Nineteen healthy subjects were asked to rate the intensity of each stimulus at the single-trial level. By computing linear mixed-effects models (LMEs), the encoding of both modalities was explored by relating stimulus intensities to brain responses. While the intensity of single touch trials is encoded only by theta activity, pain perception is encoded by theta, alpha, and gamma activity. Beta activity in the tactile domain shows an on/off-like characteristic in response to touch that was not observed in the pain domain. Our results extend recent findings on the contribution of different neuronal oscillations to the processing of nociceptive and tactile stimuli.
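At the core of the LME analysis described here is a regression of single-trial band power on stimulus intensity. A simplified fixed-effects-only sketch on synthetic data illustrates that regression; a full LME would add per-subject random effects, and the coefficient value below is a made-up placeholder, not a result from the study.

```python
import numpy as np

# Synthetic single-trial data: intensity ratings and theta-band power
# with an assumed true encoding slope of 0.4 plus noise.
rng = np.random.default_rng(1)
intensity = rng.uniform(0, 10, size=300)
theta_power = 0.4 * intensity + rng.normal(0, 1, size=300)

# Ordinary least squares with an intercept column: the fitted slope
# quantifies how strongly intensity is encoded in theta power.
X = np.column_stack([np.ones_like(intensity), intensity])
beta, *_ = np.linalg.lstsq(X, theta_power, rcond=None)
slope = beta[1]
```

In the mixed-effects version, a nonzero fixed-effect slope for a given band (after accounting for between-subject variability) is what licenses the claim that the band encodes stimulus intensity.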