
    Brain-Machine Interfaces: A Tale of Two Learners

    Brain-machine interface (BMI) technology has rapidly matured over the last two decades, mainly thanks to the introduction of artificial intelligence (AI) methods, in particular, machine-learning algorithms. Yet, the need for subjects to learn to modulate their brain activity remains a key component of successful BMI control. Blending machine and subject learning, or mutual learning, is widely acknowledged in the BMI field. Nevertheless, we posit that current research trends are heavily biased toward the machine-learning side of BMI training. In this article, we take a critical view of the relevant literature, and of our own previous work, to identify the key issues for more effective mutual-learning schemes in translational BMIs that are specifically tailored to promote subject learning. We identify the main caveats in the literature on subject learning in BMI, in particular, the lack of longitudinal studies involving end users and shortcomings in quantifying subject learning, and pinpoint critical improvements for future experimental designs.

    Artificial Intelligence for Noninvasive Fetal Electrocardiogram Analysis


    The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users

    This work aims at corroborating the importance and efficacy of mutual learning in motor imagery (MI) brain–computer interface (BCI) by leveraging the insights obtained through our participation in the BCI race of the Cybathlon event. We hypothesized that, contrary to the popular trend of focusing mostly on the machine-learning aspects of MI BCI training, a comprehensive mutual-learning methodology that reinstates the three learning pillars (at the machine, subject, and application level) as equally significant could lead to a BCI–user symbiotic system able to succeed in real-world scenarios such as the Cybathlon event. Two severely impaired participants with chronic spinal cord injury (SCI) were trained following our mutual-learning approach to control their avatar in a virtual BCI race game. The competition outcomes substantiate the effectiveness of this type of training. Most importantly, the present study is one among very few to provide multifaceted evidence on the efficacy of subject learning during BCI training. Learning correlates could be derived at all levels of the interface—application, BCI output, and electroencephalography (EEG) neuroimaging—with two end users, sufficiently longitudinal evaluation, and, importantly, under real-world and even adverse conditions.

    Multi-Scale Architectures for Human Pose Estimation

    In this dissertation we present multiple state-of-the-art deep learning methods for computer vision, using multi-scale approaches for two main tasks: pose estimation and semantic segmentation. For pose estimation, we introduce a complete framework that expands the fields-of-view of the network through a multi-scale approach, resulting in a significant increase in the effectiveness of conventional backbone architectures for several pose estimation tasks, without requiring a larger network or postprocessing. Our multi-scale pose estimation framework contributes to research on single-person pose estimation in both 2D and 3D scenarios, pose estimation in videos, and the estimation of multiple people’s poses in a single image for both top-down and bottom-up approaches. In addition to the enhanced multi-person pose estimation capability generated by our multi-scale approach, our framework also demonstrates a superior capacity to scale to the more detailed and demanding task of full-body pose estimation, including up to 133 joints per person. For segmentation, we present a new efficient architecture for semantic segmentation, based on a “Waterfall” Atrous Spatial Pooling architecture, that achieves a considerable accuracy increase while decreasing the number of network parameters and the memory footprint. The proposed Waterfall architecture leverages the efficiency of progressive filtering in the cascade architecture while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Additionally, our method does not rely on a postprocessing stage with conditional random fields, which further reduces complexity and required training time.
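    The “Waterfall” module described above relies on a serial cascade of atrous (dilated) convolutions whose fields-of-view grow with the dilation rate, mimicking a spatial pyramid at lower cost. A minimal sketch of how the effective receptive field of such a cascade grows (the kernel sizes and dilation rates below are illustrative assumptions, not the dissertation’s actual configuration):

    ```python
    def receptive_field(kernel_sizes, dilations):
        """Effective receptive field of a serial cascade of dilated convolutions.

        A k x k convolution with dilation d covers (k - 1) * d + 1 inputs along
        each axis; stacking layers grows the field additively.
        """
        rf = 1
        for k, d in zip(kernel_sizes, dilations):
            rf += (k - 1) * d
        return rf

    # A hypothetical waterfall-style branch: three 3x3 convolutions with
    # progressively larger dilation rates.
    print(receptive_field([3, 3, 3], [1, 6, 12]))  # -> 39
    ```

    Because the branches filter progressively rather than in parallel, the same large field-of-view is reached with fewer parameters than an equivalent spatial pyramid of independent large-dilation branches.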

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are highly requested in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed to meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
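    Multi-atlas segmentation, one of the shape-modelling techniques named above, registers several labelled atlases to a target image and then fuses their propagated label maps. A minimal sketch of the simplest fusion step, per-voxel majority voting (illustrative only; the thesis papers use more sophisticated registration and fusion than this):

    ```python
    import numpy as np

    def majority_vote_fusion(atlas_labels):
        """Fuse registered atlas label maps by per-voxel majority vote.

        atlas_labels: list of integer label arrays, all the same shape,
        already warped into the target image's space.
        """
        stacked = np.stack(atlas_labels)           # (n_atlases, *volume_shape)
        n_classes = int(stacked.max()) + 1
        # Count votes for each class at every voxel, then pick the winner.
        votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
        return votes.argmax(axis=0)

    # Three hypothetical 2x2 atlas label maps (0 = background, 1 = organ).
    a1 = np.array([[0, 1], [1, 1]])
    a2 = np.array([[0, 1], [0, 1]])
    a3 = np.array([[1, 1], [0, 1]])
    print(majority_vote_fusion([a1, a2, a3]))  # -> [[0 1] [0 1]]
    ```

    In practice the vote is often weighted by local registration quality or intensity similarity, which is where learned models such as random decision forests can enter the pipeline.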

    Breaking Down the Barriers To Operator Workload Estimation: Advancing Algorithmic Handling of Temporal Non-Stationarity and Cross-Participant Differences for EEG Analysis Using Deep Learning

    This research focuses on two barriers to using EEG data for workload assessment: day-to-day variability and cross-participant applicability. Several signal processing techniques and deep learning approaches are evaluated in multi-task environments. These methods account for temporal, spatial, and frequential data dependencies. Variance of frequency-domain power distributions for cross-day workload classification is statistically significant. Skewness and kurtosis are not significant in an environment absent workload transitions, but are salient with transitions present. LSTMs improve day-to-day feature stationarity, decreasing error by 59% compared to previous best results. A multi-path convolutional recurrent model using bi-directional, residual recurrent layers significantly increases predictive accuracy and decreases cross-participant variance. Deep learning regression approaches are applied to a multi-task environment with workload transitions. Accounting for temporal dependence significantly reduces error and increases correlation compared to baselines. Visualization techniques for LSTM feature saliency are developed to understand EEG analysis model biases.
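    Frequency-domain power distributions and their higher-order moments (variance, skewness, kurtosis) are the kind of features whose cross-day behavior the abstract above analyzes. A hedged sketch of computing per-epoch band power and its distributional moments (the band limits, sampling rate, and synthetic signal are assumptions for illustration, not the study's pipeline):

    ```python
    import numpy as np

    def bandpower(signal, fs, band):
        """Power of one epoch in a frequency band, via the periodogram."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
        mask = (freqs >= band[0]) & (freqs < band[1])
        return psd[mask].sum()

    def moments(x):
        """Variance, skewness, and excess kurtosis of a feature distribution."""
        x = np.asarray(x, dtype=float)
        z = (x - x.mean()) / x.std()
        return x.var(), (z ** 3).mean(), (z ** 4).mean() - 3.0

    rng = np.random.default_rng(0)
    fs = 128                                # assumed sampling rate (Hz)
    t = np.arange(fs * 4) / fs              # 4-second epochs
    # Synthetic "EEG": a 10 Hz alpha rhythm plus noise, one epoch per session.
    epochs = [np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
              for _ in range(20)]
    alpha_power = [bandpower(e, fs, (8, 13)) for e in epochs]
    var, skew, kurt = moments(alpha_power)
    ```

    Tracking how `var`, `skew`, and `kurt` drift across recording days is one way to quantify the non-stationarity that the recurrent models above are designed to absorb.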