
    AUTOMATED ARTIFACT REMOVAL AND DETECTION OF MILD COGNITIVE IMPAIRMENT FROM SINGLE CHANNEL ELECTROENCEPHALOGRAPHY SIGNALS FOR REAL-TIME IMPLEMENTATIONS ON WEARABLES

    Electroencephalography (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. The EEG signal is widely studied to evaluate cognitive state and to detect brain disorders such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for the early detection of Mild Cognitive Impairment (MCI). MCI is the preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI using data obtained from a single-channel EEG electrode. Artifacts such as eye-blink activity can corrupt EEG signals. We investigate unsupervised and effective removal of the ocular artifact (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition techniques were systematically evaluated for the effectiveness of OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) were studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by correlation coefficients (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. It is demonstrated that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications. For MCI detection from the clean EEG data, we collected scalp EEG data while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the Event-Related Potential (ERP) of the collected EEG signals, which included time- and spectral-domain characteristics of the response.
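    The wavelet-thresholding idea behind the OA removal can be sketched as follows. This is a minimal single-level Haar stationary-wavelet sketch with an illustrative threshold rule, not the dissertation's multi-level DWT/SWT pipeline or its tuned settings:

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def haar_swt(x):
    """One level of an undecimated (stationary) Haar wavelet transform."""
    xs = np.roll(x, -1)                        # circular right neighbor
    return (x + xs) / SQ2, (x - xs) / SQ2      # approximation, detail

def haar_iswt(cA, cD):
    """Exact inverse of haar_swt (average of the two reconstructions)."""
    a = (cA + cD) / SQ2                        # reconstructs x[i]
    b = np.roll((cA - cD) / SQ2, 1)            # reconstructs x[i+1], shifted back
    return (a + b) / 2

def remove_ocular_artifact(eeg, k=1.5):
    """Sketch of WT-based OA suppression: soft-threshold the detail band.
    The single Haar level and the robust threshold are illustrative choices."""
    cA, cD = haar_swt(eeg)
    thr = k * np.median(np.abs(cD)) / 0.6745               # robust noise estimate
    cD = np.sign(cD) * np.maximum(np.abs(cD) - thr, 0.0)   # soft thresholding
    return haar_iswt(cA, cD)

# Toy example: a 10 Hz "EEG" oscillation with a simulated blink-like spike.
t = np.linspace(0, 1, 256)
eeg = np.sin(2 * np.pi * 10 * t)
eeg[120:124] += 5.0
clean = remove_ocular_artifact(eeg)
```

In practice a multi-level decomposition is used so that the slow, large-amplitude ocular component can be isolated from the neural bands before thresholding.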
The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. The robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F score 85%) were obtained using the support vector machine (SVM) method with a radial basis function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for early detection of MCI. We also developed a single-channel EEG-based MCI severity monitoring algorithm by generating Montreal Cognitive Assessment (MoCA) scores from the features extracted from EEG. We performed multi-trial and single-trial analyses for the development of the MCI severity monitoring algorithm. We studied Multivariate Regression (MR), Ensemble Regression (ER), Support Vector Regression (SVR), and Ridge Regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained from the ER. In our single-trial analysis, we constructed a time-frequency image from each trial and fed it to a convolutional neural network (CNN). The performance of the regression models was evaluated by the RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method.
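    The ranking-then-classification pipeline can be sketched with scikit-learn. The data here are synthetic stand-ins for the 590 ERP features, and the SVM hyperparameters are placeholders rather than the paper's tuned sigma/cost values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 40 subjects x 590 ERP-style features (the real data are clinical).
X, y = make_classification(n_samples=40, n_features=590, n_informative=25,
                           random_state=0)

# 1) Rank all features with a random forest and keep the top 25.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top25 = np.argsort(rf.feature_importances_)[::-1][:25]

# 2) Leave-one-out cross-validation of an RBF-kernel SVM on those features.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=100, gamma="scale"))
scores = cross_val_score(svm, X[:, top25], y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```

With leave-one-out, each of the 40 folds trains on 39 subjects and tests on the held-out one, which is why the abstract reports it as a robustness check for a small cohort.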

    Information Maximizing Component Analysis of Left Ventricular Remodeling Due to Myocardial Infarction

    Background: Although adverse left ventricular shape changes (remodeling) after myocardial infarction (MI) are predictive of morbidity and mortality, current clinical assessment is limited to simple mass and volume measures, or dimension ratios such as the length-to-width ratio. We hypothesized that information maximizing component analysis (IMCA), a supervised feature extraction method, can provide more efficient and sensitive indices of overall remodeling. Methods: IMCA was compared to linear discriminant analysis (LDA), both supervised methods, to extract the most discriminatory global shape changes associated with remodeling after MI. Finite element shape models from 300 patients with myocardial infarction from the DETERMINE study (age 31–86, mean age 63, 20% women) were compared with 1991 asymptomatic cases from the MESA study (age 44–84, mean age 62, 52% women) available from the Cardiac Atlas Project. IMCA and LDA were each used to identify a single mode of global remodeling best discriminating the two groups. Logistic regression was employed to determine the association between the remodeling index and MI. Goodness-of-fit results were compared against a baseline logistic model comprising standard clinical indices. Results: A single IMCA mode simultaneously describing end-diastolic and end-systolic shapes achieved the best results (lowest deviance, Akaike information criterion, and Bayesian information criterion, and the largest area under the receiver operating characteristic curve). This mode provided a continuous scale on which remodeling can be quantified and visualized, showing that MI patients tend to present a larger size, a more spherical shape, more bulging of the apex, and thinner walls. Conclusions: IMCA enables better characterization of global remodeling than LDA, and can be used to quantify the progression of disease and the effect of treatment. These data and results are available from the Cardiac Atlas Project (http://www.cardiacatlas.org).

    Imaging-based representation and stratification of intra-tumor heterogeneity via tree-edit distance

    Personalized medicine is the future of medical practice. In oncology, tumor heterogeneity assessment represents a pivotal step for effective treatment planning and prognosis prediction. Despite new procedures for DNA sequencing and analysis, non-invasive methods for tumor characterization are needed to have an impact on daily routine. To this end, imaging texture analysis is rapidly gaining ground, holding the promise of surrogating the histopathological assessment of tumor lesions. In this work, we propose a tree-based representation strategy for describing the intra-tumor heterogeneity of patients affected by metastatic cancer. We leverage radiomics information extracted from PET/CT imaging and provide an exhaustive and easily readable summary of disease spread. We exploit this novel patient representation to perform cancer subtyping via a hierarchical clustering technique. To this purpose, a new heterogeneity-based distance between trees is defined and applied to a case study of prostate cancer. Cluster interpretation is explored in terms of concordance with severity status, tumor burden, and biological characteristics. Results are promising, as the proposed method outperforms current literature approaches. Ultimately, the proposed method outlines a general analysis framework for extracting knowledge from the daily acquired imaging data of patients and providing insights for effective treatment planning.
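    The overall scheme (a pairwise distance between patient lesion representations, fed to hierarchical clustering) can be sketched as follows. The paper's tree-edit distance is replaced here by a crude per-lesion matching distance purely for illustration, and the data are random stand-ins for radiomics features:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def patient_distance(a, b):
    """Crude illustrative distance between two patients, each given as an
    (n_lesions x n_features) array of per-lesion radiomics features.
    Not the paper's tree-edit distance."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    matched = min(cost.min(axis=0).sum(), cost.min(axis=1).sum())
    return matched + abs(len(a) - len(b))     # penalty for unmatched lesions

rng = np.random.default_rng(1)
# 8 toy "patients" with 2-4 lesions each, 3 features per lesion.
patients = [rng.normal(size=(rng.integers(2, 5), 3)) for _ in range(8)]

d = np.array([[patient_distance(a, b) for b in patients] for a in patients])
d = (d + d.T) / 2                             # enforce exact symmetry
np.fill_diagonal(d, 0.0)

# Average-linkage hierarchical clustering on the precomputed distance matrix,
# cut into two subtypes.
labels = fcluster(linkage(squareform(d), method="average"),
                  t=2, criterion="maxclust")
```

The key design point is that any distance defined directly on patient-level structures (here a matrix of lesion features, in the paper a tree) plugs into standard hierarchical clustering via a precomputed condensed distance matrix.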

    Constrained manifold learning for the characterization of pathological deviations from normality

    This paper describes a technique to (1) learn the representation of a pathological motion pattern from a given population, and (2) compare individuals to this population. Our hypothesis is that this pattern can be modeled as a deviation from normal motion by means of non-linear embedding techniques. Each subject is represented by a 2D map of local motion abnormalities, obtained from a statistical atlas of myocardial motion built from a healthy population. The algorithm estimates a manifold from a set of patients with varying degrees of the same disease, and compares individuals to the training population using a mapping to the manifold and a distance to normality along the manifold. The approach extends recent manifold learning techniques by constraining the manifold to pass through a physiologically meaningful origin representing a normal motion pattern. Interpolation techniques using locally adjustable kernels improve the accuracy of the method. The technique is applied in the context of cardiac resynchronization therapy (CRT), focusing on a specific motion pattern of intra-ventricular dyssynchrony called septal flash (SF). We estimate the manifold from 50 CRT candidates with SF and test it on 37 CRT candidates and 21 healthy volunteers. Experiments highlight the relevance of nonlinear techniques for modeling a pathological pattern from the training set and comparing new individuals to this pattern.

    Use of advanced analytics for health estimation and failure prediction in wind turbines

    The energy sector has undergone drastic changes and critical revolutions in the last few decades. Renewable energy sources have grown significantly, now representing a sizeable share of the energy production mix. Wind energy has seen increasing rates of adoption, being one of the more convenient and sustainable means of producing energy. Research and innovation have helped greatly in driving down the production and operation costs of wind energy, yet important challenges remain open. This thesis addresses predictive maintenance and monitoring of wind turbines, aiming to present predictive frameworks designed with the necessities of the industry in mind. More concretely, interpretability, scalability, modularity, and reliability of the predictions are the objectives of this project, together with limited data requirements. Of all the data at the disposal of wind turbine operators, SCADA is the principal source of information utilized in this research, owing to its wide availability and low cost. Ensemble models played an important role in the development of the presented predictive frameworks thanks to their modular nature, which allows combining very diverse algorithms and data types. Important insights gained from these experiments are the beneficial effect of combining multiple and diverse sources of data (for example, SCADA and alarm logs), the ease of combining different algorithms and indicators, and the noticeable gain in predictive performance this can provide. Finally, given the central role that SCADA data plays in this thesis, and in the wind energy industry more broadly, a detailed analysis of the limitations and shortcomings of SCADA data is presented. In particular, the effect of data aggregation, a common practice in the wind industry, is determined by developing a methodological framework that has been used to study high-frequency SCADA data.
This led to the conclusion that the typical aggregation periods of 5–10 minutes, the standard in the wind energy industry, cannot capture and maintain the information content of fast-changing signals such as wind and electrical measurements.
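    The effect of the standard 10-minute aggregation on a fast-changing signal can be illustrated with pandas. The signal model and the variance-ratio proxy for information loss are illustrative choices, not the thesis's methodology:

```python
import numpy as np
import pandas as pd

# Toy high-frequency "SCADA" wind-speed channel: 1 Hz samples over one hour,
# with fluctuations much faster than the 10-minute aggregation window.
idx = pd.date_range("2020-01-01", periods=3600, freq="s")
rng = np.random.default_rng(0)
wind = 8 + np.sin(np.arange(3600) / 30) + rng.normal(0, 1.0, 3600)
hf = pd.Series(wind, index=idx)

agg = hf.resample("10min").mean()        # the industry-standard aggregation

# Fraction of the raw signal's variance surviving aggregation: a rough proxy
# for how much of the fast dynamics the 10-minute means retain.
retained = agg.var() / hf.var()
print(f"variance retained after 10-min averaging: {retained:.2%}")
```

Averaging over 600 samples all but erases sub-minute dynamics, which is the qualitative conclusion the thesis reaches for wind and electrical measurements.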

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system performs properly with representative users in the intended environment and does not behave in an unexpected manner. Beginning with definitions, descriptions, and examples of ML processes and systems, the research identifies a clear and general process to effectively test these systems. The developed framework ensures the most productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents that make it difficult to integrate, trace, and test through V&V. Modern systems engineers, along with system developers and stakeholders, collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or Systems Modeling Language (SysML) representation of the system and its requirements that passes readily among stakeholders for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test a ML system, one performs either white-box or black-box testing, or both. Black-box testing is a method in which the internal model structure, design, and implementation of the system under test are unknown to the test engineer; testers and analysts evaluate only the system's behavior for given inputs and outputs.
    White-box testing is a method in which the internal model structure, design, and implementation of the system under test are known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing. However, testers sometimes lack authorization to access the internal structure of the system; the researcher captures this decision in the ML framework. No two ML systems are exactly alike, and the testing of each system must therefore be customized to some degree. Even so, an effective process exists. This research includes some specialized methods, based on grounded theory, for use in testing internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework. Systems engineers and analysts can simply apply the framework in various white-box and black-box V&V testing circumstances.
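    A black-box check of the kind described above can be sketched as a small requirement-style harness that sees only the system's `predict` interface. The model, data, check names, and thresholds below are placeholders, not the framework's actual test cases:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A stand-in "system under test"; the tester never inspects its internals.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def verify_black_box(predict, X_test, y_test, min_accuracy=0.7):
    """Requirement-style checks against input-output behavior alone.
    Thresholds and check names are illustrative."""
    preds = predict(X_test)
    return {
        # Outputs stay within the expected label set.
        "outputs_in_label_set": set(preds) <= set(y_test),
        # Same input yields same output (no hidden nondeterminism).
        "deterministic": np.array_equal(preds, predict(X_test)),
        # Performance requirement from the (hypothetical) specification.
        "meets_accuracy_floor": (preds == y_test).mean() >= min_accuracy,
    }

report = verify_black_box(model.predict, X_te, y_te)
```

White-box checks would additionally inspect internals such as weights, activations, or decision paths, which this harness deliberately cannot do.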

    Contribution of CT-Scan Analysis by Artificial Intelligence to the Clinical Care of TBI Patients

    The gold standard for diagnosing intracerebral lesions after traumatic brain injury (TBI) is the computed tomography (CT) scan, and owing to its accessibility and improved image quality, the global burden of CT scanning for TBI patients is increasing. Recent developments in the automated detection of traumatic brain lesions and in AI-assisted medical decision-making represent opportunities to help clinicians screen more patients, identify the nature and volume of lesions, and estimate patient outcomes. This short review summarizes ongoing work on the use of AI and CT scans for patients with TBI.