
    Enhancing Motor Imagery Decoding in Brain Computer Interfaces using Riemann Tangent Space Mapping and Cross Frequency Coupling

    Objective: Motor Imagery (MI) serves as a crucial experimental paradigm within the realm of Brain Computer Interfaces (BCIs), aiming to decode motor intentions from electroencephalogram (EEG) signals. Method: Drawing inspiration from Riemannian geometry and Cross-Frequency Coupling (CFC), this paper introduces a novel approach termed Riemann Tangent Space Mapping using Dichotomous Filter Bank with Convolutional Neural Network (DFBRTS) to enhance the representation quality and decoding capability of MI features. DFBRTS first filters EEG signals through a Dichotomous Filter Bank structured as a complete binary tree. It then employs Riemann Tangent Space Mapping to extract salient EEG features within each sub-band. Finally, a lightweight convolutional neural network performs further feature extraction and classification under the joint supervision of cross-entropy and center loss. To validate its efficacy, extensive experiments were conducted with DFBRTS on two well-established benchmark datasets: the BCI Competition IV 2a (BCIC-IV-2a) dataset and the OpenBMI dataset. The performance of DFBRTS was benchmarked against several state-of-the-art MI decoding methods, alongside other Riemannian geometry-based MI decoding approaches. Results: DFBRTS significantly outperforms the other MI decoding algorithms on both datasets, achieving classification accuracies of 78.16% for four-class and 71.58% for two-class hold-out classification, compared to the existing benchmarks. Comment: 22 pages, 7 figures
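
    The tangent-space step is the part most easily shown in isolation. Below is a minimal sketch of mapping EEG trial covariance matrices into the Riemannian tangent space, assuming NumPy/SciPy; it uses the arithmetic mean as the reference point and omits the filter bank, the sqrt(2) off-diagonal weighting, and the CNN stage, so it illustrates the idea rather than reproducing DFBRTS.

```python
# Minimal sketch: Riemann tangent space mapping of EEG covariance matrices.
# Trials shaped (n_trials, n_channels, n_samples); names are illustrative.
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def trial_covariances(trials):
    """Sample covariance matrix per trial."""
    return np.array([t @ t.T / t.shape[1] for t in trials])

def tangent_space_features(covs):
    """Project SPD covariance matrices onto the tangent space at their mean."""
    # Arithmetic mean as a simple reference point (the paper would use a
    # Riemannian mean; this keeps the sketch short).
    C_ref = covs.mean(axis=0)
    W = fractional_matrix_power(C_ref, -0.5)   # whitening by the reference
    feats = []
    for C in covs:
        S = logm(W @ C @ W)                    # matrix log of whitened covariance
        iu = np.triu_indices_from(S)
        feats.append(S[iu].real)               # vectorise the upper triangle
    return np.array(feats)

rng = np.random.default_rng(0)
trials = rng.standard_normal((8, 4, 256))      # 8 trials, 4 channels, 256 samples
X = tangent_space_features(trial_covariances(trials))
print(X.shape)                                 # (8, 10) for 4 channels
```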

    A method to enhance the deep learning in an aerial image

    © 2017 IEEE. In this paper, we propose a pre-processing method that can be applied to deep learning methods and is tailored to the characteristics of aerial images. The method combines color and spatial information to perform quick background filtering, which not only increases execution speed but also reduces the rate of false positives.
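
    As a rough illustration of this kind of color-plus-spatial background filtering (not the authors' exact method), the sketch below masks low-saturation background with an HSV threshold and cleans the mask morphologically, assuming OpenCV is available; the threshold values are placeholders.

```python
# Illustrative sketch of quick background filtering for aerial imagery,
# combining a color cue (HSV threshold) with a spatial cue (morphology).
# Threshold values are placeholders, not taken from the paper.
import cv2
import numpy as np

def background_filter(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Color cue: mark low-saturation, mid-brightness pixels as background.
    bg = cv2.inRange(hsv, (0, 0, 60), (180, 40, 200))
    # Spatial cue: smooth the mask so isolated pixels do not survive.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    bg = cv2.morphologyEx(bg, cv2.MORPH_OPEN, kernel)
    bg = cv2.morphologyEx(bg, cv2.MORPH_CLOSE, kernel)
    # Keep only foreground pixels for the downstream deep network.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=cv2.bitwise_not(bg))

image = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for an aerial frame
filtered = background_filter(image)
```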

    Analysis of Human Gait Using Hybrid EEG-fNIRS-Based BCI System: A Review

    Human gait is a complex activity that requires high coordination between the central nervous system, the limbs, and the musculoskeletal system. More research is needed to understand the complexity of this coordination in order to design better and more effective rehabilitation strategies for gait disorders. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are among the most used technologies for monitoring brain activity due to their portability, non-invasiveness, and relatively low cost compared to others. Fusing EEG and fNIRS is a well-established methodology proven to enhance brain-computer interface (BCI) performance in terms of classification accuracy, number of control commands, and response time. Although there has been significant research exploring hybrid BCIs (hBCIs) involving both EEG and fNIRS for different types of tasks and human activities, human gait remains underinvestigated. In this article, we aim to shed light on recent developments in the analysis of human gait using a hybrid EEG-fNIRS-based BCI system. This review followed the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) during the data collection and selection phase. We put a particular focus on the commonly used signal processing and machine learning algorithms, and survey the potential applications of gait analysis. We distill some of the critical findings of this survey as follows. First, hardware specifications and experimental paradigms should be carefully considered because of their direct impact on the quality of gait assessment. Second, since both modalities, EEG and fNIRS, are sensitive to motion artifacts and to instrumental and physiological noise, there is a need for more robust and sophisticated signal processing algorithms. Third, hybrid temporal and spatial features, obtained by fusing EEG and fNIRS and associated with cortical activation, can help better identify the correlation between brain activation and gait. In conclusion, the hBCI (EEG + fNIRS) approach is not yet much explored for the lower limb due to its complexity compared to the upper limb. Existing BCI systems for gait monitoring tend to focus on only one modality. We foresee a vast potential in adopting hBCIs for gait analysis. Imminent technical breakthroughs are expected in using hybrid EEG-fNIRS-based BCIs for gait to control assistive devices and to monitor neuroplasticity in neuro-rehabilitation. However, although these hybrid systems perform well in controlled experimental environments, there is still a long way to go before they can be adopted as certified medical devices in real-life clinical applications.
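
    A common hBCI baseline consistent with the review's findings is feature-level fusion: concatenating per-trial EEG and fNIRS features before classification. The sketch below shows that idea on synthetic data with scikit-learn; the feature choices and dimensions are illustrative, not drawn from any surveyed study.

```python
# Sketch of feature-level EEG-fNIRS fusion for a two-class gait-related task.
# Synthetic data; real pipelines add artifact removal and proper epoching.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 60
eeg_feats = rng.standard_normal((n_trials, 16))    # e.g. band power per channel
fnirs_feats = rng.standard_normal((n_trials, 8))   # e.g. mean HbO per channel
y = rng.integers(0, 2, n_trials)                   # walk vs. rest labels

# Hybrid feature vector: simple concatenation of both modalities.
X = np.hstack([eeg_feats, fnirs_feats])
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(scores.mean())
```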

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain-computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, and its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A long-term goal is also to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial patterns (CSP), a standard feature extraction technique in BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems. First, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Second, emotion experiments elicited by taste and taste recall are examined to determine the pleasantness or unpleasantness of a food for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
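
    To make the decomposition step concrete, here is a minimal standard EMD example, assuming the third-party EMD-signal package (imported as PyEMD) is installed; the thesis's complex and multivariate extensions are not shown.

```python
# Sketch of empirical mode decomposition of a nonstationary signal into
# intrinsic mode functions (IMFs), the kind of time-frequency localised
# components the thesis builds EEG features on.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
# Two oscillatory modes plus a slow trend, mimicking a nonstationary trace.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + t**2

imfs = EMD().emd(signal, t)        # each row is one IMF
print(imfs.shape)                  # (n_imfs, 1000); count depends on the signal
```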

    Non-Destructive Evaluation for Composite Material

    The Nondestructive Evaluation Sciences Branch (NESB) at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has conducted impact damage experiments over the past few years with the goal of understanding structural defects in composite materials. The Data Science Team within the NASA LaRC Office of the Chief Information Officer (OCIO) has been working with the Non-Destructive Evaluation (NDE) subject matter experts (SMEs), Dr. Cheryl Rose from the Structural Mechanics & Concepts Branch and Dr. William Winfree from the Research Directorate, to develop computer vision solutions using digital image processing and machine learning techniques that can help identify structural defects in composite materials. The research focused on developing an autonomous Non-Destructive Evaluation system which detects, identifies, and characterizes cracks and delamination in composite materials from computed tomography (CT) scan images. The identification and visualization of cracking and delamination will allow researchers to use volumetric models to better understand the propagation of damage in materials, leading to design optimizations that will prevent catastrophic failure.
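
    As one hedged illustration of classical crack enhancement on a CT slice (not the team's actual pipeline), the Frangi ridge filter from scikit-image responds to the thin, elongated structures that cracks tend to form:

```python
# Sketch of enhancing crack-like features in a CT slice with a ridge filter,
# one classical image-processing step such a system might use.
import numpy as np
from skimage.filters import frangi, threshold_otsu

slice_2d = np.random.rand(256, 256)        # stand-in for a CT slice
ridges = frangi(slice_2d)                  # emphasise thin, elongated ridges
crack_mask = ridges > threshold_otsu(ridges)
print(crack_mask.sum(), "candidate crack pixels")
```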

    Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis

    Background and Objectives: This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. Methods: In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of their combination. Results: It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset as a feature selection method (29 genes selected), and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifiers, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). Conclusions: It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society.
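
    The winning PSO + SVM combination can be sketched as a wrapper search: a swarm of binary feature masks scored by cross-validated SVM accuracy. The toy version below drops the velocity term of canonical PSO and runs on synthetic data, so it shows the structure of the approach rather than the paper's implementation.

```python
# Minimal sketch of swarm-style feature selection wrapped around an SVM.
# Heavily simplified: binary masks move toward the best mask found so far.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((62, 200))          # 62 samples, 200 gene features
y = rng.integers(0, 2, 62)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()

n_particles, n_iters = 10, 15
masks = rng.random((n_particles, X.shape[1])) < 0.1   # sparse initial masks
best_mask, best_fit = None, -1.0
for _ in range(n_iters):
    for i in range(n_particles):
        f = fitness(masks[i])
        if f > best_fit:
            best_fit, best_mask = f, masks[i].copy()
        # Move toward the global best: copy some of its bits, mutate a few.
        copy = rng.random(X.shape[1]) < 0.2
        masks[i][copy] = best_mask[copy]
        flip = rng.random(X.shape[1]) < 0.01
        masks[i][flip] = ~masks[i][flip]

print(best_mask.sum(), "features selected, CV accuracy", round(best_fit, 3))
```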

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, ranging from data-driven inhomogeneities and anatomical intra- and inter-subject variability to the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, brain vessels’ topology is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming vessels join by minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analog-equivalents from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations using an isogeometric analysis framework, where both geometry and solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future works are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
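
    The geodesic minimum spanning tree step has a compact analogue in SciPy. The sketch below connects candidate vessel points by an MST over plain Euclidean distances; the thesis instead uses connectivity-optimised geodesic costs over angiographic data, so this is only the skeleton of the idea.

```python
# Sketch of the minimum-spanning-tree step: given candidate vessel points and
# pairwise connection costs, keep the tree of cheapest connections.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
points = rng.random((30, 3))                 # candidate vessel locations in 3D
costs = squareform(pdist(points))            # dense pairwise distance matrix
mst = minimum_spanning_tree(costs)           # sparse matrix holding tree edges

edges = np.transpose(mst.nonzero())
print(len(edges), "edges connect", len(points), "points")  # n - 1 edges
```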

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of the most varied kinds, e.g. model organisms, to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup's parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity, and other key properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger volumes of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of better quality. There is therefore a strong need in the scientific community for an efficient, automated workflow for X-ray data analysis that can cope with such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in medical imaging; hence they are neither optimised for high-throughput data streams nor able to exploit the hierarchical nature of samples. The main contribution of this thesis is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow builds on improved methods for data pre-processing, registration, localisation, and segmentation. Every stage of the workflow that involves a training phase can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibrous structures in samples, a new, highly parallelisable 3D orientation analysis method was developed, based on a novel concept of emitted rays, enabling more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing series of datasets of a similar kind. Furthermore, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language.
    The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidneys, and heart. Moreover, the developed 3D orientation analysis method was used in the morphological analysis of polymer scaffold datasets to steer a fabrication process towards desirable properties.
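
    The per-dataset hyperparameter fine-tuning described above can be illustrated with a toy version: sweep one segmentation threshold and keep the value that maximises Dice overlap against a small annotated reference. This illustrates the principle only; it is not code from the released Python modules.

```python
# Sketch of per-dataset hyperparameter tuning for a segmentation stage:
# choose the threshold with the best Dice score on annotated reference data.
import numpy as np

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-9)

rng = np.random.default_rng(4)
truth = rng.random((64, 64)) > 0.7                 # stand-in ground-truth mask
volume = truth * 0.8 + rng.random((64, 64)) * 0.3  # noisy intensity image

thresholds = np.linspace(0.1, 0.9, 17)
scores = [dice(volume > th, truth) for th in thresholds]
best = thresholds[int(np.argmax(scores))]
print("best threshold", best, "Dice", max(scores))
```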

    An electromagnetic imaging system for metallic object detection and classification

    PhD Thesis. Electromagnetic imaging currently plays a vital role in various disciplines, from engineering to medical applications, and is based upon the characteristics of electromagnetic fields and their interaction with the properties of materials. The detection and characterisation of metallic objects which pose a threat to safety is of great interest to public and homeland security worldwide. Inspections are conducted under the prerequisite that the person inspected is divested of all metallic objects. These inspection conditions are problematic in terms of disrupting the movement of people, and they produce a soft target for terrorist attack. Thus, there is a need for a new generation of detection systems and information technologies which can provide enhanced characterisation and discrimination capabilities. This thesis proposes an automatic metallic object detection and classification system. Two related topics are addressed: the design and implementation of a new metallic object detection system, and the development of an appropriate signal processing algorithm to classify the targeted signatures. The new detection system uses an array of sensors in conjunction with pulsed excitation. The contributions of this research can be summarised as follows: (1) investigating the possibility of using magneto-resistance sensors for metallic object detection; (2) evaluating the proposed system by generating a database consisting of 12 real handguns and more than 20 objects used in daily life; (3) extracting features from the system outputs using four feature categories relating to the objects' shape, material composition, time-frequency signal analysis, and transient pulse response; and (4) applying two classification methods to classify the objects into threats and non-threats, giving a successful classification rate of more than 92% using the feature combination and classification framework of the new system. The study concludes that the novel magnetic field imaging system and its signal outputs can be used to detect, identify, and classify metallic objects. In comparison with conventional induction-based walk-through metal detectors, the magneto-resistance sensor array-based system shows great potential for object identification and discrimination. This novel system design and signal processing achievement may produce significant improvements in automatic threat object detection and classification applications. Iraqi Cultural Attaché, London.
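
    Of the four feature categories, the transient pulse response is the easiest to sketch: fit an exponential decay to a sensor's response after pulsed excitation and take the time constant as a material-dependent feature. The example below uses SciPy on a synthetic signal; the decay model and constants are illustrative, not the thesis's exact formulation.

```python
# Sketch of a transient-pulse-response feature: the time constant of an
# exponential decay fitted to a sensor's response after pulsed excitation.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    return a * np.exp(-t / tau) + c

t = np.linspace(0, 5e-3, 500)                       # 5 ms observation window
rng = np.random.default_rng(5)
response = decay(t, 1.0, 8e-4, 0.02) + rng.normal(0, 0.01, t.size)

(a, tau, c), _ = curve_fit(decay, t, response, p0=(1.0, 1e-3, 0.0))
print("decay time constant:", tau)                  # one classifier input
```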

    Optimizing Common Spatial Pattern for a Motor Imagery-based BCI by Eigenvector Filtration

    One of the fundamental criteria for the successful application of a brain-computer interface (BCI) system is to extract significant features that capture invariant characteristics specific to each brain state. Distinct features play an important role in enabling a computer to associate different electroencephalogram (EEG) signals with different brain states. To ease the workload on the feature extractor and enhance separability between different brain states, the data is often transformed or filtered to maximize separability before feature extraction. The common spatial patterns (CSP) approach can achieve this by linearly projecting the multichannel EEG data into a surrogate data space through a weighted summation of the appropriate channels. However, the choice of spatial filters is very significant in this projection and has a direct impact on classification. This paper presents an optimized pattern selection method based on the CSP filter for improved classification accuracy. Based on the hypothesis that values closer to zero in the CSP filter introduce noise rather than useful information, the CSP filter is modified by analyzing it and removing or filtering out the degradative or insignificant values. This hypothesis is tested by comparing the BCI results of eight subjects using the conventional CSP filters and the optimized CSP filter. In the majority of cases the latter produces better performance in terms of overall classification accuracy.
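
    The paper's hypothesis translates directly into code: compute the CSP filters, then zero the near-zero weights. The sketch below computes CSP via the generalised eigenproblem of the two class covariances (a standard formulation, not necessarily the authors' exact variant) and applies an illustrative cut-off.

```python
# Sketch: CSP filters followed by the eigenvector-filtration idea of zeroing
# small-magnitude weights, assumed to contribute noise rather than signal.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """CSP via the generalised eigenproblem of the two class covariances."""
    cov = lambda trials: sum(t @ t.T / np.trace(t @ t.T) for t in trials) / len(trials)
    Ca, Cb = cov(trials_a), cov(trials_b)
    _, V = eigh(Ca, Ca + Cb)        # eigenvector columns, eigenvalues ascending
    return V.T                      # rows are spatial filters

def filter_eigenvectors(W, cutoff=0.1):
    """Zero small-magnitude weights; the cut-off is illustrative."""
    W_opt = W.copy()
    W_opt[np.abs(W_opt) < cutoff * np.abs(W_opt).max()] = 0.0
    return W_opt

rng = np.random.default_rng(6)
class_a = rng.standard_normal((20, 8, 128))   # 20 trials, 8 channels each
class_b = rng.standard_normal((20, 8, 128))
W = csp_filters(class_a, class_b)
W_opt = filter_eigenvectors(W)
projected = W_opt[[0, -1]] @ class_a[0]       # two most discriminative filters
```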