65 research outputs found

    Breathing pattern characterization in patients with respiratory and cardiac failure

    The main objective of this thesis is to study and characterize breathing patterns through the respiratory flow signal, applied to patients on weaning trials from mechanical ventilation and patients with chronic heart failure (CHF). The aim is to contribute to the understanding of the underlying physiological processes and to help in the diagnosis of these patients.

    One of the most challenging problems in intensive care units is still the process of discontinuing mechanical ventilation, as over 10% of patients who undergo successful T-tube trials have to be reintubated in less than 48 hours. A failed weaning trial may induce cardiopulmonary distress and carries a higher mortality rate. We characterize the respiratory pattern and the dynamic interaction between heart rate and breathing rate to obtain noninvasive indices that provide enhanced information about the weaning process and improve the weaning outcome. This is achieved through a comparison of 94 patients with successful trials (GS), 39 patients who failed to maintain spontaneous breathing (GF), and 21 patients who successfully maintained spontaneous breathing and were extubated, but required the reinstitution of mechanical ventilation in less than 48 hours because they were unable to breathe (GR). The ECG and respiratory flow signals used in this study were acquired during 30-minute T-tube tests. The respiratory pattern was characterized by means of a number of respiratory time series. Joint symbolic dynamics applied to the time series of heart rate and respiratory frequency was used to describe the cardiorespiratory interactions of patients during the weaning trial process. Clustering, histogram equalization, support vector machine (SVM) classification and validation techniques enabled the selection of the best subset of input features. We defined a new optimization metric for unbalanced classification problems, and established a new SVM feature selection method based on this balance index B. The proposed B-based SVM feature selection provided a better balance between sensitivity and specificity in all classifications. The best classification result was obtained with SVM feature selection based on both accuracy and the balance index, which classified GS and GF with an accuracy of 80%, considering 6 features. Classifying GS versus the rest of the patients, the best result was obtained with 9 features, 81%, and the accuracy classifying GR versus GS, and GR versus the rest of the patients, was 83% and 81% with 9 and 10 features, respectively.

    The mortality rate in CHF patients remains high, and risk stratification in these patients is still one of the major challenges of contemporary cardiology. Patients with CHF often develop periodic breathing (PB) patterns, including Cheyne-Stokes respiration (CSR) and periodic breathing without apnea. Periodic breathing in CHF patients is associated with increased mortality, especially in CSR patients. Therefore it could serve as a risk marker and can provide enhanced information about the pathophysiological condition of CHF patients. The main goal of this research was to identify CHF patients' condition noninvasively by characterizing and classifying respiratory flow patterns from patients with PB and non-periodic breathing (nPB) and from healthy subjects, using 15-minute respiratory flow signals. The respiratory pattern was characterized by a stationary and a nonstationary time-frequency study of the envelope of the respiratory flow signal. Power-related parameters achieved the best results in all of the classifications involving healthy subjects and CHF patients with CSR, PB and nPB, and the ROC curves validated the results obtained for the identification of the different respiratory patterns. We also investigated the use of correntropy for the spectral characterization of respiratory patterns in CHF patients. The correntropy function accounts for higher-order statistical moments and is robust to outliers. Due to the former property, the respiratory and modulation frequencies appear at their actual locations along the frequency axis in the correntropy spectral density (CSD). The best results were achieved with correntropy- and CSD-related parameters that characterized the power in the modulation and respiration discriminant bands, defined as frequency intervals centred on the modulation and respiration frequency peaks, respectively. All patients, i.e. both PB and nPB, exhibit various degrees of periodicity depending on their condition, whereas healthy subjects have no pronounced periodicity. This fact led to excellent results classifying PB versus nPB patients (88.9%), CHF patients versus healthy subjects (95.2%), and nPB patients versus healthy subjects (94.4%) with only one parameter.
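    The correntropy spectral density mentioned above can be sketched in a few lines: compute the autocorrentropy over lags with a Gaussian kernel, center it, and take its Fourier transform. This is an illustrative sketch, not the thesis implementation; the kernel width `sigma`, the toy amplitude-modulated signal, and all function names are assumptions chosen for the example.

```python
import numpy as np

def correntropy(x, sigma=1.0, max_lag=None):
    """Autocorrentropy V[m]: mean Gaussian-kernel similarity of x[n] and x[n+m]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2
    v = np.empty(max_lag)
    for m in range(max_lag):
        d = x[: n - m] - x[m:]
        v[m] = np.mean(np.exp(-d ** 2 / (2 * sigma ** 2)))
    return v

def correntropy_spectral_density(x, sigma=1.0):
    """Center the correntropy (remove its mean) and take the FFT magnitude."""
    v = correntropy(x, sigma)
    return np.abs(np.fft.rfft(v - v.mean()))

# Toy flow-like signal: respiration at f_r modulated at f_m (both in Hz).
fs, f_r, f_m = 100, 0.3, 0.05
t = np.arange(0, 120, 1 / fs)
x = (1 + 0.8 * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_r * t)
csd = correntropy_spectral_density(x, sigma=0.5)
```

    Because the kernel embeds higher-order moments, both the respiration and the modulation frequency appear as peaks in `csd` at their actual spectral positions, which is the property the abstract exploits.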

    Computing Network of Diseases and Pharmacological Entities through the Integration of Distributed Literature Mining and Ontology Mapping

    The proliferation of -omics (such as genomics and proteomics) and -ology (such as systems biology, cell biology and pharmacology) disciplines has spawned new frontiers of research in drug discovery and personalized medicine. A vast number (21 million) of published research results are archived in PubMed, and the archive is continually growing in size. To improve the accessibility and utility of such a large body of literature, it is critical to develop a suite of semantics-sensitive technologies capable of discovering knowledge and of inferring possible new relationships based on statistical co-occurrences of meaningful terms or concepts. In this context, this thesis presents a unified framework to mine a large number of publications through the integration of latent semantic analysis (LSA) and ontology mapping. In particular, a parameter-optimized, robust, scalable, and distributed LSA (DiLSA) technique was designed and implemented on a carefully selected 7.4 million PubMed records related to pharmacology. The DiLSA model was integrated with MeSH to make the model effective and efficient for a specific domain. An optimized multi-gram dictionary was customized by mapping the MeSH ontology to build the DiLSA model. A fully integrated web-based application, called PharmNet, was developed to bridge the gap between biological knowledge and clinical practice. Preliminary analysis using PharmNet shows improved performance over a global LSA model. A limited expert evaluation was performed to validate the retrieved results and networks against the biological literature. A thorough performance evaluation and validation of results is in progress.
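    The LSA core of such a framework can be sketched as a truncated SVD of a term-document matrix, with document similarity measured in the latent space. The tiny matrix, the term labels, and the choice of k = 2 below are illustrative assumptions, not the DiLSA implementation.

```python
import numpy as np

# Toy term-document matrix (rows: terms, cols: documents); in practice the
# counts would come from a MeSH-mapped multi-gram dictionary over PubMed.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],   # hypothetical term "aspirin"
    [1.0, 0.0, 2.0, 0.0],   # hypothetical term "platelet"
    [0.0, 3.0, 0.0, 1.0],   # hypothetical term "insulin"
    [0.0, 1.0, 0.0, 2.0],   # hypothetical term "glucose"
])

# LSA: keep the k strongest latent "topics" via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T      # one latent vector per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Documents 0 and 2 share cardiovascular terms; documents 0 and 1 share none.
sim_02 = cosine(doc_vecs[0], doc_vecs[2])
sim_01 = cosine(doc_vecs[0], doc_vecs[1])
```

    Co-occurrence structure survives the projection: documents sharing terms stay close in the latent space, which is what lets LSA surface statistical relationships between concepts.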

    Novas estratégias de pré-processamento, extração de atributos e classificação em sistemas BCI

    Advisor: Romis Ribeiro de Faissol Attux. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Brain-computer interfaces (BCIs) aim to control an external device by directly employing the user's brain signals. Such systems require a series of steps to process the observed signals and extract relevant features in order to correctly and efficiently interpret the user's intentions. Although the field has developed continuously and some difficulties have been overcome, it is still necessary to increase usability by enhancing classification capacity and increasing the reliability of the response. The classical objective of BCI research is to support communication and control for users with impaired communication due to illness or injury. Typical BCI applications are the operation of interface cursors, spelling programs or external devices such as wheelchairs, robots and different types of prostheses. The user sends modulated information to the BCI by engaging in mental tasks that produce distinct brain patterns; the BCI acquires signals from the user's brain and translates them into suitable communication. This thesis aims to develop faster and more reliable non-invasive BCI communication based on the study of different techniques acting in the signal processing stages, considering two principal aspects: the machine learning approach, and the reduction of the complexity of the user's task of learning the mental patterns. Research focused on two BCI paradigms, Motor Imagery (MI) and the P300 event-related potential (ERP). Signal processing algorithms for the detection of both brain patterns were applied and evaluated.

    Pre-processing was the first perspective studied, considering how to highlight the response of brain phenomena relative to noise and other sources of information that may distort the EEG signal; this step directly influences the response of the subsequent processing and classification blocks. Independent Component Analysis (ICA) was used in conjunction with feature selection methods and different classifiers to separate the original sources related to the desynchronization produced by the MI phenomenon; an attempt was made to create a type of spatial filter that pre-processed the signal, reducing the influence of noise. The classification results were also compared with those of standard pre-processing methods, such as the CAR filter. The results showed that it is possible to separate the components related to motor activity: the ICA proposal was on average 4% higher in classification accuracy than the results obtained using CAR, or when no filter was used. The role of methods that study the connectivity of different brain areas was evaluated as the second contribution of this work; this allowed considering aspects that reflect the complexity of a user's brain response, as the BCI field needs a deeper interpretation of what happens at the brain level in several of the studied phenomena. The technique used to build functional connectivity graphs was correntropy, which was used to quantify similarity; a comparison was also made using the Spearman and Pearson correlation measures. Functional connectivity relates the activity of different brain areas, so the graph was evaluated using three centrality measures, which gauge the importance of a node in the network. In addition, two types of classifiers were tested, comparing the results in terms of classification accuracy. In conclusion, correntropy can bring more information to the study of connectivity than simple correlation, which improved the classification results, especially when used with the ELM classifier.

    Finally, this thesis demonstrates that BCIs can provide effective communication in an application where the prediction of the classification response was modeled, which allowed optimizing the parameters of the signal processing performed with the xDAWN spatial filter and an FLDA classifier for the P300 speller problem, seeking the best response for each user. The prediction model used was Bayesian and confirmed the results obtained with the on-line operation of the system, allowing the parameters of both the filter and the classifier to be optimized. Using filters with few input channels, the optimized model gave better classification accuracy than the values initially obtained when the xDAWN filter was trained for the same cases. The results showed that improvements in the BCI transducer (pre-processing, feature extraction and classification methods) constituted the basis for faster and more reliable BCI communication. Improved classification results were obtained in all cases, compared with techniques that have been widely used and had already shown effectiveness for this type of problem. However, there are still aspects to consider regarding subjects' responses to specific paradigm types, bearing in mind that their responses may vary across days, and the real implications of this for the definition and use of different signal processing methods. Doctorate in Electrical Engineering (Computer Engineering). CNPq grant 153311/2014-2.
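    The CAR filter used as the pre-processing baseline above is simple enough to sketch: at each time sample, the mean across all electrodes is subtracted from every channel, removing activity common to the whole montage. The channel count, sample count, and synthetic offset below are illustrative assumptions.

```python
import numpy as np

def car_filter(eeg):
    """Common Average Reference: subtract, at each sample, the mean over
    channels. `eeg` has shape (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
# 8 channels, 500 samples; the +5.0 offset mimics an artifact shared by
# all electrodes (e.g. a reference drift) that CAR removes.
eeg = rng.normal(size=(8, 500)) + 5.0
filtered = car_filter(eeg)
```

    After filtering, the across-channel mean is exactly zero at every sample, which is why CAR is a common reference point when evaluating ICA-based spatial filters.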

    Novel Deep Learning Techniques For Computer Vision and Structure Health Monitoring

    This thesis proposes novel techniques for building a generic framework for both regression and classification tasks in vastly different application domains, such as computer vision and civil engineering. Several frameworks are proposed and combined into a complex deep network design to provide a complete solution to a wide variety of problems. The experimental results demonstrate significant improvements in accuracy and efficiency from all the proposed techniques.

    Novel Computational Methods for State Space Filtering

    The state-space formulation for time-dependent models has long been used in various applications in science and engineering. While the classical Kalman filter (KF) provides optimal posterior estimation under linear Gaussian models, filtering in nonlinear and non-Gaussian environments remains challenging. Based on Monte Carlo approximation, the classical particle filter (PF) can provide more precise estimation under nonlinear non-Gaussian models. However, it suffers from particle degeneracy. Drawing from optimal transport theory, the stochastic map filter (SMF) offers a solution to this problem, but its performance is limited by the restricted flexibility of its nonlinear map parameterisation. To address these issues, a hybrid particle-stochastic map filter (PSMF) is first proposed in this thesis, in which the two parts of the split likelihood are assimilated by the PF and the SMF, respectively. Systematic resampling and smoothing are employed to alleviate the particle degeneracy caused by the PF. Furthermore, two PSMF variants based on linear and nonlinear maps (PSMF-L and PSMF-NL) are proposed, and their filtering performance is compared with various benchmark filters under different nonlinear non-Gaussian models.

    Although they achieve accurate filtering results, particle-based filters require expensive computations because of the large number of samples involved. Instead, robust Kalman filters (RKFs) provide efficient solutions for linear models with heavy-tailed noise by adopting the recursive estimation framework of the KF. To exploit the stochastic characteristics of the noise, heavy-tailed distributions that can fit various practical noises constitute a viable solution. Hence, this thesis also introduces a novel RKF framework, RKF-SGαS, in which the signal noise is assumed to be Gaussian and the heavy-tailed measurement noise is modelled by the sub-Gaussian α-stable (SGαS) distribution. The corresponding joint posterior distribution of the state vector and auxiliary random variables is estimated by the variational Bayesian (VB) approach. Four different minimum mean square error (MMSE) estimators of the scale function are presented. The RKF-SGαS is compared with state-of-the-art RKFs under three kinds of heavy-tailed measurement noise, and the simulation results demonstrate its estimation accuracy and efficiency. One notable limitation of the proposed RKF-SGαS is its reliance on precise model parameters; substantial model errors can impede its filtering performance. Therefore, this thesis also introduces a data-driven RKF method, referred to as RKFnet, which combines the conventional RKF framework with a deep learning technique. An unsupervised scheduled sampling (USS) technique is proposed to improve the stability of the training process. Furthermore, the advantages of the proposed RKFnet are quantified with respect to various traditional RKFs.
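    The recursive predict-update structure that the RKF variants above inherit from the classical KF can be sketched for the simplest case, a scalar random-walk state observed in Gaussian noise. The noise variances `q` and `r`, the initial state, and the simulated data are illustrative assumptions, not any of the thesis models.

```python
import numpy as np

# Minimal scalar Kalman filter for the model
#   x[k] = x[k-1] + w,  w ~ N(0, q)   (random-walk state)
#   y[k] = x[k]   + v,  v ~ N(0, r)   (noisy measurement)
def kalman_filter(ys, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in ys:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # update with the measurement residual
        p = (1 - k) * p            # posterior variance shrinks
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = 2.0
ys = truth + rng.normal(scale=0.5, size=200)   # constant state, heavy noise
est = kalman_filter(ys)
```

    Robust KFs keep exactly this recursion but replace the Gaussian measurement model with a heavy-tailed one, so single outlying measurements no longer drag the estimate.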

    MINING MULTI-GRANULAR MULTIVARIATE MEDICAL MEASUREMENTS

    This thesis is motivated by the need to predict the mortality of patients in the Intensive Care Unit. At the heart of this problem is the ability to accurately classify multivariate, multi-granular time-series patient data. The approach taken in this thesis uses z-score normalization to make variables comparable, Singular Value Decomposition to reduce the number of features, and a Support Vector Machine to classify patient tuples. This approach outperforms other classification models, such as k-Nearest Neighbors, and demonstrates that the SVM is a viable model for this task. The hope is that future work can build on this research and one day make an impact in the medical community.
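    The first two stages of that pipeline, z-score normalization followed by SVD feature reduction, can be sketched directly; the reduced matrix is what a downstream SVM would consume. The synthetic "vitals" data and the choice of two retained components are illustrative assumptions.

```python
import numpy as np

def zscore(X):
    """Column-wise z-score so variables on different scales become comparable."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd[sd == 0] = 1.0               # guard against constant columns
    return (X - mu) / sd

def svd_reduce(X, k):
    """Project rows onto the top-k right singular vectors (feature reduction)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

rng = np.random.default_rng(2)
# 50 hypothetical patients, 5 measurements on very different scales.
X = np.hstack([rng.normal(80, 15, (50, 1)),     # e.g. heart rate
               rng.normal(0.4, 0.1, (50, 1)),   # e.g. a lab ratio
               rng.normal(120, 25, (50, 3))])   # further vitals
Z = zscore(X)
Xr = svd_reduce(Z, k=2)             # reduced features fed to the classifier
```

    Without the z-score step, the large-scale columns would dominate the singular vectors, which is precisely the problem the normalization stage addresses.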

    Robust Face Recognition based on Color and Depth Information

    One of the most important advantages of automatic face recognition is its non-intrusiveness. Face images can sometimes be acquired without the user's knowledge or explicit cooperation. However, face images acquired in an uncontrolled environment can exhibit varying imaging conditions. Traditionally, researchers have focused on tackling this problem using 2D gray-scale images, due to the wide availability of 2D cameras and the low processing and storage cost of gray-scale data. Nevertheless, face recognition cannot be performed reliably with 2D gray-scale data, owing to insufficient information and its high sensitivity to pose, expression and illumination variations. Recent rapid developments in hardware make the acquisition and processing of color and 3D data feasible. This thesis aims to improve face recognition accuracy and robustness using color and 3D information.

    In terms of color information, this thesis proposes several improvements over existing approaches. First, the Block-wise Discriminant Color Space is proposed, which learns a discriminative color space from local patches of a face image rather than from the holistic image, as human faces display different colors in different parts. Second, observing that most existing color spaces consist of at most three color components while complementary information can be found in multiple color components across multiple color spaces, the Multiple Color Fusion model is proposed to search for and utilize multiple color components effectively. Lastly, two robust color face recognition algorithms are proposed: the Color Sparse Coding method, which can deal with face images corrupted by noise and occlusion, and the Multi-linear Color Tensor Discriminant method, which harnesses multi-linear techniques to handle non-linear data. Experiments show that all the proposed methods outperform their existing competitors.

    In terms of 3D information, this thesis investigates the feasibility of face recognition using the Kinect. Unlike traditional 3D scanners, which are too slow and too expensive for broad face recognition applications, the Kinect trades data quality for high speed and low cost. An algorithm is proposed to show that Kinect data can be used for face recognition despite its noisy nature. To fully utilize Kinect data, a more sophisticated RGB-D face recognition algorithm is developed, which harnesses the Color Sparse Coding framework and 3D information to perform accurate face recognition robustly, even under simultaneously varying poses, illuminations, expressions and disguises.