75 research outputs found

    Breathing pattern characterization in patients with respiratory and cardiac failure

    The main objective of this thesis is to study and characterize breathing patterns through the respiratory flow signal, applied to patients on weaning trials from mechanical ventilation and patients with chronic heart failure (CHF). The aim is to contribute to the understanding of the underlying physiological processes and to help in the diagnosis of these patients.

    One of the most challenging problems in intensive care units is still the process of discontinuing mechanical ventilation: over 10% of patients who undergo successful T-tube trials have to be reintubated in less than 48 hours, and a failed weaning trial may induce cardiopulmonary distress and carries a higher mortality rate. We characterize the respiratory pattern and the dynamic interaction between heart rate and breathing rate to obtain noninvasive indices that provide enhanced information about the weaning process and improve the weaning outcome. This is achieved through a comparison of 94 patients with successful trials (GS), 39 patients who failed to maintain spontaneous breathing (GF), and 21 patients who maintained spontaneous breathing and were extubated, but required the reinstitution of mechanical ventilation in less than 48 hours (GR). The ECG and respiratory flow signals used in this study were acquired during 30-minute T-tube tests. The respiratory pattern was characterized by means of a number of respiratory time series, and joint symbolic dynamics applied to the heart rate and respiratory frequency series was used to describe the cardiorespiratory interactions of these patients during the weaning trial. Clustering, histogram equalization, support vector machine (SVM) classification and validation techniques enabled the selection of the best subset of input features. We defined a new optimization metric for unbalanced classification problems, the balance index B, and established a new SVM feature selection method based on it; the proposed B-based SVM feature selection provided a better balance between sensitivity and specificity in all classifications.
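    The abstract does not give the exact form of the balance index B, so the following sketch is only illustrative: it assumes a simple stand-in, B = 1 - |Se - Sp|, and wraps it around a greedy forward feature search with an SVM to show the general shape of such a B-driven selection procedure. All function names, defaults and the definition of B itself are hypothetical, not taken from the thesis.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import recall_score

        def balance_index(y_true, y_pred):
            """Hypothetical balance index: 1 - |sensitivity - specificity|.
            The thesis defines its own index B; this form is only a stand-in.
            Assumes binary labels coded as 0/1."""
            se = recall_score(y_true, y_pred, pos_label=1)
            sp = recall_score(y_true, y_pred, pos_label=0)
            return 1.0 - abs(se - sp)

        def greedy_b_svm_selection(X, y, n_features=6):
            """Greedy forward selection keeping, at each step, the feature whose
            addition maximises (accuracy + balance index) of an SVM classifier."""
            selected, remaining = [], list(range(X.shape[1]))
            while remaining and len(selected) < n_features:
                scores = []
                for j in remaining:
                    cols = selected + [j]
                    pred = cross_val_predict(
                        SVC(kernel='rbf', class_weight='balanced'),
                        X[:, cols], y, cv=5)
                    scores.append((np.mean(pred == y) + balance_index(y, pred), j))
                _, best_j = max(scores)
                selected.append(best_j)
                remaining.remove(best_j)
            return selected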
    The best classification result was obtained with SVM feature selection based on both accuracy and the balance index, which classified GS and GF with an accuracy of 80% using 6 features. Classifying GS versus the rest of the patients, the best result was obtained with 9 features (81%), and the accuracy classifying GR versus GS and GR versus the rest of the patients was 83% and 81%, with 9 and 10 features, respectively.

    The mortality rate in CHF patients remains high, and risk stratification of these patients is still one of the major challenges of contemporary cardiology. Patients with CHF often develop periodic breathing (PB) patterns, including Cheyne-Stokes respiration (CSR) and periodic breathing without apnea. Periodic breathing in CHF patients is associated with increased mortality, especially in CSR patients; it could therefore serve as a risk marker and provide enhanced information about the pathophysiological condition of CHF patients. The main goal of this part of the research was to identify the condition of CHF patients noninvasively by characterizing and classifying respiratory flow patterns from patients with PB and non-periodic breathing (nPB) and from healthy subjects, using 15-minute recordings of the respiratory flow signal. The respiratory pattern was characterized by a stationary and a nonstationary time-frequency study of the envelope of the respiratory flow signal. Power-related parameters of the envelope achieved the best results in all classifications involving healthy subjects and CHF patients with CSR, PB and nPB, and ROC curves validated the results obtained for the identification of the different respiratory patterns. We also investigated the use of correntropy for the spectral characterization of the respiratory patterns of CHF patients. The correntropy function accounts for higher-order statistical moments and is robust to outliers. Thanks to the former property, both the respiratory and the modulation frequencies appear at their actual locations along the frequency axis in the correntropy spectral density (CSD). The best results were achieved with correntropy and CSD-related parameters that characterized the power in the modulation and respiration discriminant bands, defined as frequency intervals centred on the modulation and respiration frequency peaks, respectively. PB and nPB patients exhibit various degrees of periodicity depending on their condition, whereas healthy subjects show no pronounced periodicity. This led to excellent classification results with a single parameter: 88.9% for PB versus nPB patients, 95.2% for CHF patients versus healthy subjects, and 94.4% for nPB patients versus healthy subjects.
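    For reference, the centered correntropy of a discrete signal and its spectral density can be estimated directly from the standard definitions; the sketch below shows one way to obtain the CSD whose modulation- and respiration-frequency peaks are exploited above. The function name, bandwidth rule and lag range are illustrative choices, not taken from the thesis.

        import numpy as np

        def correntropy_spectral_density(x, sigma=None, max_lag=None):
            """Centered Gaussian-kernel correntropy over lags and its spectral
            density (CSD), obtained as the Fourier transform of the centered
            correntropy sequence."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            if sigma is None:                       # Silverman-style default bandwidth
                sigma = 1.06 * np.std(x) * N ** (-1 / 5)
            if max_lag is None:
                max_lag = N // 2

            kernel = lambda d: np.exp(-d ** 2 / (2 * sigma ** 2))

            # mean kernel value over all sample pairs, used for centering
            mean_k = np.mean(kernel(x[:, None] - x[None, :]))

            v = np.empty(max_lag)
            for m in range(max_lag):
                v[m] = np.mean(kernel(x[m:] - x[:N - m])) - mean_k

            csd = np.abs(np.fft.rfft(v))
            freqs = np.fft.rfftfreq(max_lag)        # cycles/sample; scale by fs if known
            return freqs, csd, v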

    An Examination of Some Significant Approaches to Statistical Deconvolution

    We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework for non-Gaussian signals. The first class of algorithms we consider is the Minimum Entropy Deconvolution (MED) family. We discuss the similarities between these algorithms despite their differing origins and motivations, give new theoretical results concerning their behaviour and generality, and present evidence of scenarios in which they may fail; in some cases we propose modifications to overcome these shortfalls. Following the discussion of the MED algorithms, we examine a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions, and compare its BD performance with that of the MED algorithms. We find that BD carried out via correntropy matching cannot be straightforwardly interpreted as simultaneous moment matching, owing to the breakdown of the correntropy expansion in terms of moments. Other issues, such as the maximum/minimum phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques.

    For the problem of RD, we categorise the different kinds of uncertainty encountered in estimation and discuss the techniques required to solve each case. Primarily, we tackle the overlooked cases of robustifying deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility and gives improved robust deconvolution compared with the standard plug-in Wiener Filter (WF).
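    As a concrete illustration of the MED family discussed above, the following is a minimal NumPy sketch of a Wiggins-style iterative MED: each iteration solves a normal-equation update that increases the varimax (normalised fourth-moment) objective of the deconvolved output. The function name, filter length and stopping rule are illustrative choices, not taken from the thesis.

        import numpy as np

        def wiggins_med(x, filt_len=30, n_iter=50, tol=1e-8):
            """Minimum Entropy Deconvolution via the classical Wiggins iteration.
            Returns an FIR filter f maximising sum(y**4) / sum(y**2)**2 for y = f * x."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            # Design matrix of delayed input copies: X[n, l] = x[n - l]
            X = np.zeros((N, filt_len))
            for l in range(filt_len):
                X[l:, l] = x[:N - l]
            R = X.T @ X                      # input autocorrelation matrix

            f = np.zeros(filt_len)
            f[0] = 1.0                       # start from a spike
            prev = -np.inf
            for _ in range(n_iter):
                y = X @ f                    # current filter output
                b = X.T @ (y ** 3)           # stationarity condition: R f is proportional to X^T y^3
                f = np.linalg.solve(R, b)
                f /= np.linalg.norm(f)
                y = X @ f
                score = np.sum(y ** 4) / np.sum(y ** 2) ** 2
                if score - prev < tol:       # varimax norm has stopped improving
                    break
                prev = score
            return f, y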

    New strategies for pre-processing, feature extraction and classification in BCI systems

    Advisor: Romis Ribeiro de Faissol Attux. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Brain-computer interfaces (BCIs) aim to control an external device by directly employing the user's brain signals. Such systems require a series of steps to process the observed signals and extract relevant features from them in order to interpret the user's intentions correctly and efficiently. Although the field has developed continuously and some difficulties have been overcome, it is still necessary to increase usability by enhancing classification capacity and increasing the reliability of the response. The classical objective of BCI research is to support communication and control for users whose communication is impaired by illness or injury. Typical BCI applications are the operation of interface cursors, spelling programs or external devices such as wheelchairs, robots and different types of prostheses. The user sends modulated information to the BCI by engaging in mental tasks that produce distinct brain patterns; the BCI acquires signals from the user's brain and translates them into suitable communication. This thesis aims to develop faster and more reliable non-invasive BCI communication based on the study of different techniques acting on the signal processing stages, considering two principal aspects: the machine learning approach, and the reduction of the complexity of the task of learning the mental patterns by the user. The research focused on two BCI paradigms, Motor Imagery (MI) and the P300 event-related potential (ERP), and signal processing algorithms for the detection of both brain patterns were applied and evaluated.

    Pre-processing was the first perspective studied, considering how to highlight the response of the brain phenomena relative to noise and other sources of information that may distort the EEG signal; this step directly influences the response of the subsequent processing and classification blocks. Independent Component Analysis (ICA) was used in conjunction with feature selection methods and different classifiers to separate the original sources related to the desynchronization produced by the MI phenomenon, in an attempt to create a type of spatial filter that pre-processes the signal and reduces the influence of noise. The classification results were compared with those of standard pre-processing methods such as the CAR filter. The results showed that it is possible to separate the components related to motor activity: the ICA proposal was, on average, 4% higher in classification accuracy than the results obtained using CAR or no filter at all.
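    A minimal sketch of this kind of ICA-based spatial filtering is shown below, using scikit-learn's FastICA purely for illustration; the component-selection rule (in practice one would look for components showing mu/beta desynchronization) is left to the caller, and all names are assumptions rather than the thesis's actual pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_spatial_filter(eeg, keep):
            """Project EEG (n_samples x n_channels) onto independent components,
            keep only the components listed in `keep`, zero out the rest, and
            reconstruct the signal in channel space."""
            ica = FastICA(n_components=eeg.shape[1], random_state=0)
            sources = ica.fit_transform(eeg)          # estimated independent sources
            mask = np.zeros(sources.shape[1], dtype=bool)
            mask[list(keep)] = True
            sources[:, ~mask] = 0.0                   # discard noise-related components
            return ica.inverse_transform(sources)     # back to channel space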
    The role of methods that study the connectivity of different brain areas was evaluated as the second contribution of this work; this made it possible to consider aspects reflecting the complexity of a user's brain response, since the BCI field needs a deeper interpretation of what happens at the brain level in several of the phenomena studied. The technique used to build functional connectivity graphs was correntropy, used here as a similarity measure and compared against the Spearman and Pearson correlations. Functional connectivity relates the activity of different brain areas, so the resulting graphs were evaluated using three centrality measures, which quantify the importance of a node in the network. In addition, two types of classifiers were tested and their classification accuracies compared. In conclusion, correntropy can bring more information to the study of connectivity than simple correlation, which led to improved classification results, especially when it was used with the ELM classifier.

    Finally, this thesis demonstrates that BCIs can provide effective communication in an application where the prediction of the classification response was modelled, allowing the optimization of the signal processing parameters of the xDAWN spatial filter and an FLDA classifier for the P300 speller problem, seeking the best response for each user. The prediction model used was Bayesian and confirmed the results obtained with on-line operation of the system, making it possible to optimize the parameters of both the filter and the classifier. Using filters with few input channels, the optimized model gave better classification accuracy than the values initially obtained when the xDAWN filter was trained for the same cases. The results showed that improvements in the BCI transducer, pre-processing, feature extraction and classification methods constitute the basis for faster and more reliable BCI communication. Improved classification results were obtained in all cases compared with techniques that have been widely used and had already shown effectiveness for this type of problem. Nevertheless, there are still aspects to consider regarding the subjects' response to specific paradigms, bearing in mind that their response may vary across days, and the real implications of this for the definition and use of different signal processing methods.
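    A rough sketch of how a correntropy-based functional connectivity graph and a node-centrality score of the kind described above could be computed is given below. The kernel width, the z-scoring step and the use of eigenvector centrality are illustrative choices only; the thesis compares correntropy against Spearman and Pearson correlation and evaluates three centrality measures.

        import numpy as np

        def correntropy_similarity(x, y, sigma=1.0):
            """Gaussian-kernel correntropy between two EEG channels."""
            return np.mean(np.exp(-(x - y) ** 2 / (2 * sigma ** 2)))

        def connectivity_graph(eeg, sigma=1.0):
            """Functional connectivity matrix: pairwise correntropy between
            channels of `eeg` (n_samples x n_channels), z-scored per channel."""
            z = (eeg - eeg.mean(axis=0)) / eeg.std(axis=0)
            n = z.shape[1]
            W = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    W[i, j] = correntropy_similarity(z[:, i], z[:, j], sigma)
            return W

        def eigenvector_centrality(W, n_iter=200):
            """One of several possible node-centrality measures: power iteration
            for the principal eigenvector of the connectivity matrix."""
            c = np.ones(W.shape[0])
            for _ in range(n_iter):
                c = W @ c
                c /= np.linalg.norm(c)
            return c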

    Statistical signal processing of nonstationary tensor-valued data

    Real-world signals, such as the evolution of three-dimensional vector fields over time, can exhibit highly structured probabilistic interactions across their multiple constitutive dimensions. This calls for analysis tools capable of directly capturing the inherent multi-way couplings present in such data. Yet, current analyses typically employ multivariate matrix models and their associated linear algebras which are agnostic to the global data structure and can only describe local linear pairwise relationships between data entries. To address this issue, this thesis uses the property of linear separability -- a notion intrinsic to multi-dimensional data structures called tensors -- as a linchpin to consider the probabilistic, statistical and spectral separability under one umbrella. This helps to both enhance physical meaning in the analysis and reduce the dimensionality of tensor-valued problems.

    We first introduce a new identifiable probability distribution which appropriately models the interactions between random tensors, whereby linear relationships are considered between tensor fibres as opposed to between individual entries as in standard matrix analysis. Unlike existing models, the proposed tensor probability distribution formulation is shown to yield a unique maximum likelihood estimator which is demonstrated to be statistically efficient.

    Both matrices and vectors are lower-order tensors, and this gives us a unique opportunity to consider some matrix signal processing models under the more powerful framework of multilinear tensor algebra. By introducing a model for the joint distribution of multiple random tensors, it is also possible to treat random tensor regression analyses and subspace methods within a unified separability framework. Practical utility of the proposed analysis is demonstrated through case studies over synthetic and real-world tensor-valued data, including the evolution over time of global atmospheric temperatures and international interest rates.

    Another overarching theme in this thesis is the nonstationarity inherent to real-world signals, which typically consist of both deterministic and stochastic components. This thesis aims to help bridge the gap between formal probabilistic theory of stochastic processes and empirical signal processing methods for deterministic signals by providing a spectral model for a class of nonstationary signals, whereby the deterministic and stochastic time-domain signal properties are designated respectively by the first- and second-order moments of the signal in the frequency domain. By virtue of the assumed probabilistic model, novel tests for nonstationarity detection are devised and demonstrated to be effective in low-SNR environments. The proposed spectral analysis framework, which is intrinsically complex-valued, is facilitated by augmented complex algebra in order to fully capture the joint distribution of the real and imaginary parts of complex random variables, using a compact formulation.

    Finally, motivated by the need for signal processing algorithms which naturally cater for the nonstationarity inherent to real-world tensors, the above contributions are employed simultaneously to derive a general statistical signal processing framework for nonstationary tensors. This is achieved by introducing a new augmented complex multilinear algebra which allows for a concise description of the multilinear interactions between the real and imaginary parts of complex tensors. These contributions are further supported by new physically meaningful empirical results on the statistical analysis of nonstationary global atmospheric temperatures.
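    As a small illustration of the augmented complex formulation mentioned above, the second-order behaviour of complex-valued data is fully described only when the pseudo-covariance is kept alongside the ordinary covariance. The sketch below (names and layout are illustrative, not the thesis's notation) builds both and assembles the augmented covariance matrix of the stacked vector [z; conj(z)].

        import numpy as np

        def augmented_statistics(Z):
            """Augmented second-order statistics of complex data Z (n_samples x n_dims).
            C = E[z z^H] is blind to improperness; together with the pseudo-covariance
            P = E[z z^T] it fully captures the joint second-order behaviour of the
            real and imaginary parts."""
            Z = Z - Z.mean(axis=0)
            n = Z.shape[0]
            C = Z.conj().T @ Z / n      # covariance
            P = Z.T @ Z / n             # pseudo-covariance (zero for proper data)
            A = np.block([[C, P],
                          [P.conj(), C.conj()]])   # augmented covariance
            return C, P, A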

    Bayesian calibration for multiple source regression model

    In a large variety of practical applications, using information from different sources or different kinds of data is a reasonable demand. The problem of studying multiple-source data can be represented as a multi-task learning problem, in which information from one source helps in learning from another source by extracting a shared common structure. On the other hand, parameter estimates obtained from various sources can be inconsistent and conflicting. This paper proposes a Bayesian approach to calibrate data obtained from different sources and to solve the nonlinear regression problem in the presence of heteroscedasticity in the multiple-source model. An efficient algorithm is developed for its implementation. Using analytical and simulation studies, it is shown that the proposed Bayesian calibration improves the convergence rate of the algorithm and the precision of the model. The theoretical results are supported by a synthetic example and a real-world problem, namely modelling the unsteady pitching moment coefficient of an aircraft, for which a recurrent neural network is constructed.
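    To convey the general idea of reconciling conflicting estimates from several sources in a Bayesian way, the toy sketch below performs a conjugate Gaussian (precision-weighted) fusion, in which each source contributes in inverse proportion to its variance so that noisy sources are down-weighted rather than discarded. This is only a simplified stand-in, not the paper's calibration scheme, which handles heteroscedastic nonlinear regression.

        import numpy as np

        def fuse_sources(estimates, variances, prior_mean=0.0, prior_var=1e6):
            """Precision-weighted Bayesian fusion of scalar parameter estimates
            coming from several sources, each with its own uncertainty."""
            estimates = np.asarray(estimates, dtype=float)
            variances = np.asarray(variances, dtype=float)
            post_prec = 1.0 / prior_var + np.sum(1.0 / variances)
            post_mean = (prior_mean / prior_var
                         + np.sum(estimates / variances)) / post_prec
            return post_mean, 1.0 / post_prec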

    Novel Computational Methods for State Space Filtering

    The state-space formulation for time-dependent models has long been used in various applications in science and engineering. While the classical Kalman filter (KF) provides optimal posterior estimation under linear Gaussian models, filtering in nonlinear and non-Gaussian environments remains challenging.

    Based on the Monte Carlo approximation, the classical particle filter (PF) can provide more precise estimation under nonlinear non-Gaussian models. However, it suffers from particle degeneracy. Drawing from optimal transport theory, the stochastic map filter (SMF) accommodates a solution to this problem, but its performance is influenced by the limited flexibility of nonlinear map parameterisation. To account for these issues, a hybrid particle-stochastic map filter (PSMF) is first proposed in this thesis, where the two parts of the split likelihood are assimilated by the PF and SMF, respectively. Systematic resampling and smoothing are employed to alleviate the particle degeneracy caused by the PF. Furthermore, two PSMF variants based on linear and nonlinear maps (PSMF-L and PSMF-NL) are proposed, and their filtering performance is compared with various benchmark filters under different nonlinear non-Gaussian models.

    Although they achieve accurate filtering results, the particle-based filters require expensive computations because of the large number of samples involved. Instead, robust Kalman filters (RKFs) provide efficient solutions for linear models with heavy-tailed noise by adopting the recursive estimation framework of the KF. To exploit the stochastic characteristics of the noise, the use of heavy-tailed distributions which can fit various practical noises constitutes a viable solution. Hence, this thesis also introduces a novel RKF framework, RKF-SGαS, where the signal noise is assumed to be Gaussian and the heavy-tailed measurement noise is modelled by the sub-Gaussian α-stable (SGαS) distribution. The corresponding joint posterior distribution of the state vector and auxiliary random variables is estimated by the variational Bayesian (VB) approach. Four different minimum mean square error (MMSE) estimators of the scale function are presented. The RKF-SGαS is compared with state-of-the-art RKFs under three kinds of heavy-tailed measurement noise, and the simulation results demonstrate its estimation accuracy and efficiency.

    One notable limitation of the proposed RKF-SGαS is its reliance on precise model parameters, and substantial model errors can potentially impede its filtering performance. Therefore, this thesis also introduces a data-driven RKF method, referred to as RKFnet, which combines the conventional RKF framework with a deep learning technique. An unsupervised scheduled sampling (USS) technique is proposed to improve the stability of the training process. Furthermore, the advantages of the proposed RKFnet are quantified with respect to various traditional RKFs.
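    For orientation, the sketch below shows a generic bootstrap particle filter with systematic resampling, i.e. the baseline PF behaviour and degeneracy countermeasure that the hybrid PSMF builds on. The model interface (user-supplied transition f, observation h, scalar Gaussian noise levels) and all defaults are illustrative and not taken from the thesis.

        import numpy as np

        def systematic_resample(weights, rng):
            """Systematic resampling: low-variance selection of particle indices."""
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            return np.searchsorted(np.cumsum(weights), positions)

        def bootstrap_pf(y, f, h, q_std, r_std, n_particles=500, seed=0):
            """Bootstrap particle filter for x_k = f(x_{k-1}) + q, y_k = h(x_k) + r,
            with scalar state and Gaussian process/measurement noise."""
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
            estimates = []
            for yk in y:
                x = f(x) + rng.normal(0.0, q_std, n_particles)   # propagate particles
                logw = -0.5 * ((yk - h(x)) / r_std) ** 2         # Gaussian log-likelihood
                w = np.exp(logw - logw.max())
                w /= w.sum()
                estimates.append(np.sum(w * x))                  # posterior-mean estimate
                x = x[systematic_resample(w, rng)]               # resample to fight degeneracy
            return np.array(estimates)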