66 research outputs found

    Characterization and processing of atrial fibrillation episodes by convolutive blind source separation algorithms and nonlinear analysis of spectral features

    Full text link
    Supraventricular arrhythmias, in particular atrial fibrillation (AF), are the cardiac diseases most commonly encountered in routine clinical practice. The prevalence of AF is below 1% in the population under 60 years of age, but it increases significantly from the age of 70, approaching 10% in those over 80. A sustained AF episode, besides being linked to a higher mortality rate, increases the probability of thromboembolism, myocardial infarction, and stroke. On the other hand, episodes of paroxysmal AF, the form that terminates spontaneously, are the precursors of sustained AF, which has raised great interest in the scientific community in understanding the mechanisms responsible for perpetuating AF episodes or driving their spontaneous termination. Analysis of the surface ECG is the most widespread non-invasive technique in the medical diagnosis of cardiac pathologies. To use the ECG as a tool for studying AF, the atrial activity (AA) must be separated from the other cardioelectric signals. In this respect, Blind Source Separation (BSS) techniques can perform a multi-lead statistical analysis with the aim of recovering a set of independent cardioelectric sources, among which is the AA. When tackling a BSS problem, it is necessary to consider a source mixing model that matches reality as closely as possible in order to develop mathematical algorithms that solve it. One viable model assumes linear mixtures. Within the linear mixture model, a further restriction is that the mixtures be instantaneous. This instantaneous linear mixture model is the one used in Independent Component Analysis (ICA). Vayá Salort, C. (2010). Characterization and processing of atrial fibrillation episodes by convolutive blind source separation algorithms and nonlinear analysis of spectral features [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8416
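    As a minimal, hedged illustration of the instantaneous linear mixing model x(t) = A s(t) described above (not the convolutive algorithms developed in the thesis), the following sketch mixes three synthetic source signals and recovers them with scikit-learn's FastICA; the signals and the 4x3 mixing matrix are placeholders, not ECG leads or atrial sources.

```python
# Illustrative sketch of the instantaneous linear mixing model x(t) = A s(t)
# assumed by ICA. Sources and mixing matrix are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Three synthetic "cardioelectric" sources: a sawtooth-like atrial component,
# a spiky ventricular-like component, and broadband noise.
s1 = (t * 5) % 1.0 - 0.5                     # sawtooth (fibrillatory-wave-like)
s2 = np.where((t % 1.0) < 0.05, 1.0, 0.0)    # sparse spikes (QRS-like)
s3 = 0.3 * rng.standard_normal(t.size)       # noise
S = np.c_[s1, s2, s3]

A = rng.standard_normal((4, 3))              # unknown instantaneous mixing matrix
X = S @ A.T                                  # observed "multi-lead" mixtures

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)                 # recovered sources, up to scale/permutation
print(S_hat.shape)                           # (2000, 3)
```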

    Large Dimensional Independent Component Analysis: Statistical Optimality and Computational Tractability

    Full text link
    In this paper, we investigate the optimal statistical performance and the impact of computational constraints for independent component analysis (ICA). Our goal is twofold. On the one hand, we characterize the precise role of dimensionality on sample complexity and statistical accuracy, and how computational considerations may affect them. In particular, we show that the optimal sample complexity is linear in dimensionality, and interestingly, the commonly used sample kurtosis-based approaches are necessarily suboptimal. However, the optimal sample complexity becomes quadratic, up to a logarithmic factor, in the dimension if we restrict ourselves to estimates that can be computed with low-degree polynomial algorithms. On the other hand, we develop computationally tractable estimates that attain both the optimal sample complexity and minimax optimal rates of convergence. We study the asymptotic properties of the proposed estimates and establish their asymptotic normality that can be readily used for statistical inferences. Our method is fairly easy to implement and numerical experiments are presented to further demonstrate its practical merits.
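    As a hedged sketch of the setting (notation assumed here, not quoted from the paper), the noiseless ICA model and the sample excess-kurtosis contrast that the kurtosis-based approaches mentioned above rely on can be written as follows.

```latex
% Illustrative only: standard noiseless ICA model and the empirical
% excess-kurtosis contrast used by kurtosis-based estimators.
\[
  x_i = A s_i, \qquad i = 1, \dots, n, \qquad A \in \mathbb{R}^{d \times d},
\]
where the entries of $s_i$ are independent with zero mean and unit variance.
After whitening the observations to $\tilde{x}_i$, a unit vector $w$ is sought
that extremizes the empirical excess kurtosis
\[
  \widehat{\kappa}(w) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl(w^{\top} \tilde{x}_i\bigr)^{4} - 3 .
\]
```

    The abstract's claim is that estimators built on this sample-kurtosis statistic are necessarily suboptimal relative to the dimension-linear optimal sample complexity.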

    Intelligent data mining using artificial neural networks and genetic algorithms: techniques and applications

    Get PDF
    Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing, and application of a novel ISs based DM technique. The unique contribution of this thesis is in the implementation of a hybrid IS DM technique (Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustrations of several applications solved by this novel technique. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine in performing classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before the commencement of training algorithms. The Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training. In the third step, mathematical programming based rule extraction is not only used to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on data samples associated with each rule. Therefore, the methodology can provide regression rules and features not only in the polyhedrons with data instances, but also in the polyhedrons without data instances. A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/Electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all six datasets to explore its effectiveness, but the emphasis differs between datasets. For example, the emphasis of Data I and II was to give a detailed illustration of how GNMM works; Data III and IV aimed to show how to deal with difficult classification problems; the aim of Data V was to illustrate the averaging effect of GNMM; and finally Data VI was concerned with GA parameter selection and benchmarking GNMM against other IS DM techniques such as Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II & III) or public domains (i.e. Data VI), for which previous results were available in the literature, were also used to benchmark GNMM's effectiveness. As a closely integrated system, GNMM has the merit that it needs little human interaction. With some predefined parameters, such as the GA's crossover probability and the shape of the ANNs' activation functions, GNMM is able to process raw data until human-interpretable rules are extracted. This is an important practical feature, as users of a DM system often have little or no need to fully understand its internal components. Through case study applications, it has been shown that the GA-based variable selection stage is capable of filtering out irrelevant and noisy variables, thereby improving the accuracy of the model; making the ANN structure less complex and easier to understand; and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
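    A minimal sketch of the adaptive-mutation GA idea in step (1) is given below; the fitness function and the specific rate-update rule are illustrative assumptions, since the abstract does not state them (in GNMM the fitness would come from MLP performance on the selected inputs).

```python
# Sketch of GA-based input selection with an adaptive mutation rate driven by
# the average fitness of successive generations. Fitness and update rule are
# placeholders, not the exact GNMM definitions.
import numpy as np

rng = np.random.default_rng(1)
n_vars, pop_size, n_gen = 20, 30, 50
pop = rng.integers(0, 2, size=(pop_size, n_vars))   # bit mask = selected inputs
mutation_rate = 0.05

def fitness(mask, rng):
    # Placeholder: in GNMM this would be MLP validation accuracy on the inputs
    # selected by `mask`; here we simply reward small, non-empty subsets.
    return -abs(mask.sum() - 5) + rng.normal(scale=0.1)

prev_avg = None
for gen in range(n_gen):
    fit = np.array([fitness(ind, rng) for ind in pop])
    avg = fit.mean()
    # Assumed adaptive rule: explore more when average fitness stagnates.
    if prev_avg is not None:
        if avg <= prev_avg:
            mutation_rate = min(0.2, mutation_rate * 1.1)   # stagnation: explore
        else:
            mutation_rate = max(0.01, mutation_rate * 0.9)  # improving: exploit
    prev_avg = avg
    # Truncation selection, one-point crossover, bit-flip mutation.
    parents = pop[np.argsort(fit)][pop_size // 2:]
    children = []
    for _ in range(pop_size):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_vars)
        child = np.r_[a[:cut], b[cut:]]
        flip = rng.random(n_vars) < mutation_rate
        child[flip] ^= 1
        children.append(child)
    pop = np.array(children)
```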

    Some statistical approaches to the analysis of matrix-valued data

    Get PDF
    In many modern applications, we encounter data sampled in the form of two-dimensional matrices. Simple vectorization of the matrix-valued observations would destroy the intrinsic row and column information embedded in such data. In this research, we study three statistical problems that are specific to matrix-valued data. The first one concerns dimension reduction for a group of high-dimensional matrix-valued data. We propose a novel dimension reduction approach that has good approximation properties, is fast to compute in high dimensions, and explicitly incorporates the intrinsic two-dimensional structure of the matrices. We discuss the connection of our proposal with existing approaches, and compare them both numerically and theoretically. We also obtain theoretical upper bounds on the approximation error of our method. The second one is a group independent component analysis approach. Motivated by the analysis of groups of high-dimensional imaging data, we develop a framework in the frequency domain through Whittle log-likelihood maximization. Our method starts with an efficient population value decomposition, and then models each temporally-dependent source signal via parametric linear processes. The superior performance of our approach is demonstrated through simulation studies and the ADHD200 data. The third one addresses the problem of regression with matrix-valued covariates. We consider the bilinear regression model, where two coefficient vectors are used to incorporate matrix covariates. We propose two maximum likelihood based estimators. Both estimators are shown to achieve the information lower bound and hence are theoretically optimal under the classical asymptotic framework. We further propose a bilinear ridge estimator and derive its convergence property. The superior performance of the proposed estimators is demonstrated both theoretically and numerically.
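    A hedged sketch of the bilinear regression model described for the third problem; the symbols below are assumptions for illustration, not the thesis's own notation.

```latex
% Bilinear regression with a matrix-valued covariate X_i: two coefficient
% vectors act on the rows and columns of X_i instead of vectorizing it.
\[
  y_i \;=\; \alpha + u^{\top} X_i \, v + \varepsilon_i ,
  \qquad X_i \in \mathbb{R}^{p \times q},\; u \in \mathbb{R}^{p},\; v \in \mathbb{R}^{q},
\]
```

    so only p + q coefficients are estimated instead of the pq required after vectorizing the covariate, which is what preserves the row/column structure of the matrix.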

    Biologically inspired feature extraction for rotation and scale tolerant pattern analysis

    Get PDF
    Biologically motivated information processing has been an important area of scientific research for decades. The central topic addressed in this dissertation is the utilization of lateral inhibition and, more generally, linear networks with recurrent connectivity, along with complex-log conformal mapping, in machine-based implementations of information encoding, feature extraction, and pattern recognition. The reasoning behind and method for a spatially uniform implementation of the inhibitory/excitatory network model in the framework of the non-uniform log-polar transform is presented. For the space-invariant connectivity model characterized by a Toeplitz-Block-Toeplitz matrix, the overall network response is obtained without matrix inverse operations, provided the connection matrix generating function is bounded by unity. It is shown that for a network whose inter-neuron connection function is expandable in a Fourier series in the polar angle, the overall network response is steerable. The decorrelating/whitening characteristics of networks with lateral inhibition are used in order to develop space-invariant pre-whitening kernels specialized for a specific category of input signals. These filters have an extremely small memory footprint and are successfully utilized to improve the performance of adaptive neural whitening algorithms. Finally, the method for feature extraction based on a localized Independent Component Analysis (ICA) transform in the log-polar domain, aided by the previously developed pre-whitening filters, is implemented. Since the output codes produced by ICA are very sparse, a small number of non-zero coefficients was sufficient to encode the input data and obtain reliable pattern recognition performance.
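    A minimal sketch of the complex-log (log-polar) mapping the dissertation builds on is given below; grid size and the nearest-neighbour resampling are assumptions made here for brevity.

```python
# Sketch of a log-polar (complex-log) resampling of an image: rotation and
# scaling about the fixation point become translations in the (log r, theta)
# plane. Nearest-neighbour interpolation is used only to keep the example short.
import numpy as np

def log_polar(img, n_r=64, n_theta=64):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    log_r = np.linspace(0.0, np.log(r_max), n_r)            # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                       # (n_r, n_theta) map
```

    In this representation a rotation of the input about the centre becomes a circular shift along the theta axis and a uniform scaling becomes a shift along the log r axis, which is what makes space-invariant (Toeplitz-structured) network connectivity applicable to rotation- and scale-tolerant analysis.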

    Noise reduction and source recognition of partial discharge signals in gas-insulated substation

    Get PDF

    Exploratory source separation in biomedical systems

    Get PDF
    Contemporary science produces vast amounts of data. The analysis of this data is in a central role for all empirical sciences as well as humanities and arts using quantitative methods. One central role of an information scientist is to provide this research with sophisticated, computationally tractable data analysis tools. When the information scientist confronts a new target field of research producing data for her to analyse, she has two options: she may make some specific hypotheses, or guesses, on the contents of the data, and test these using statistical analysis. On the other hand, she may use general purpose statistical models to get a better insight into the data before making detailed hypotheses. Latent variable models present a case of such general models. In particular, such latent variable models are discussed where the measured data is generated by some hidden sources through some mapping. The task of source separation is to recover the sources. Additionally, one may be interested in the details of the generation process itself. We argue that when little is known of the target field, independent component analysis (ICA) serves as a valuable tool to solve a problem called blind source separation (BSS). BSS means solving a source separation problem with no, or at least very little, prior information. In case more is known of the target field, it is natural to incorporate the knowledge in the separation process. Hence, we also introduce methods for this incorporation. Finally, we suggest a general framework of denoising source separation (DSS) that can serve as a basis for algorithms ranging from an almost blind approach to highly specialised and problem-tuned source separation algorithms. We show that certain ICA methods can be constructed in the DSS framework. This leads to new, more robust algorithms. It is natural to use the accumulated knowledge from applying BSS in a target field to devise more detailed source separation algorithms. We call this process exploratory source separation (ESS). We show that DSS serves as a practical and flexible framework to perform ESS, too. Biomedical systems, such as the nervous system and the heart, constitute arguably the most complex systems that human beings have ever studied. Furthermore, contemporary physics and technology have made it possible to study these systems while they operate in near-natural conditions. The usage of these sophisticated instruments has resulted in a massive explosion of available data. In this thesis, we apply the developed source separation algorithms in the analysis of the human brain, using mainly magnetoencephalograms (MEG). The methods are directly usable for electroencephalograms (EEG) and, with small adjustments, for other imaging modalities, such as (functional) magnetic resonance imaging (fMRI), too.
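    A minimal sketch of one DSS-style iteration, under assumptions: data are first sphered, then a denoise-and-reestimate loop is run; the moving-average denoiser below is only an illustrative stand-in for the problem-tuned denoisers discussed in the thesis.

```python
# Sketch of denoising source separation (DSS) for one source: sphere the data,
# then alternate "denoise the current source estimate" and "re-estimate the
# demixing vector". The low-pass denoiser is a placeholder.
import numpy as np

def dss_one_source(X, n_iter=50, smooth=5, rng=np.random.default_rng(0)):
    # X: (channels, samples). Sphering (whitening) via eigendecomposition.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)) @ E.T @ X                     # whitened data
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    kernel = np.ones(smooth) / smooth
    for _ in range(n_iter):
        s = w @ Z                                      # current source estimate
        s_plus = np.convolve(s, kernel, mode="same")   # denoising step f(s)
        w = Z @ s_plus                                 # re-estimation step
        w /= np.linalg.norm(w)                         # renormalization
    return w @ Z, w
```

    With this linear low-pass denoiser the iteration behaves like a power method emphasizing slow components; swapping in nonlinear denoisers is what recovers ICA-like behaviour within the same framework.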

    High Performance Techniques for Face Recognition

    Get PDF
    The identification of individuals using face recognition techniques is a challenging task. This is due to the variations resulting from facial expressions, makeup, rotations, illuminations, gestures, etc. Also, facial images contain a great deal of redundant information, which negatively affects the performance of the recognition system. The dimensionality and the redundancy of the facial features have a direct effect on the face recognition accuracy. Not all the features in the feature vector space are useful. For example, non-discriminating features in the feature vector space not only degrade the recognition accuracy but also increase the computational complexity. In the field of computer vision, pattern recognition, and image processing, face recognition has become a popular research topic. This is due to its widespread applications in security and control, which allow the identified individual to access secure areas, personal information, etc. The performance of any recognition system depends on three factors: 1) the storage requirements, 2) the computational complexity, and 3) the recognition rates. Two different recognition system families are presented and developed in this dissertation. Each family consists of several face recognition systems. Each system contains three main steps, namely, preprocessing, feature extraction, and classification. Several preprocessing steps, such as cropping, facial detection, dividing the facial image into sub-images, etc., are applied to the facial images. This reduces the effect of the irrelevant information (background) and improves the system performance. In this dissertation, either a Neural Network (NN) based classifier or Euclidean distance is used for classification purposes. Five widely used databases, namely, ORL, YALE, FERET, FEI, and LFW, each containing different facial variations, such as lighting conditions, rotations, facial expressions, facial details, etc., are used to evaluate the proposed systems. The experimental results of the proposed systems are analyzed using K-fold Cross Validation (CV). In family-1, several systems are proposed for face recognition. Each system employs different integrated tools in the feature extraction step. These tools, Two Dimensional Discrete Multiwavelet Transform (2D DMWT), 2D Radon Transform (2D RT), 2D or 3D DWT, and Fast Independent Component Analysis (FastICA), are applied to the processed facial images to reduce the dimensionality and to obtain discriminating features. Each proposed system produces a unique representation, and achieves lower storage requirements and better performance than the existing methods. For further facial compression, there are three face recognition systems in the second family. Each system uses different integrated tools to obtain a better facial representation. The integrated tools, Vector Quantization (VQ), Discrete Cosine Transform (DCT), and 2D DWT, are applied to the facial images for further facial compression and better facial representation. In the systems using the tools VQ/2D DCT and VQ/2D DWT, each pose in the databases is represented by one centroid with 4*4*16 dimensions. In the third system, VQ/Facial Part Detection (FPD), each person in the databases is represented by four centroids with 4*Centroids (4*4*16) dimensions. The systems in family-2 are proposed to further reduce the dimensions of the data compared to the systems in family-1 while attaining comparable results.
For example, in family-1, the integrated tools FastICA/2D DMWT, applied to different combinations of sub-images in the FERET database with K-fold=5 (9 different poses used in the training mode), reduce the dimensions of the database by 97.22% and achieve 99% accuracy. In contrast, the integrated tools VQ/FPD in family-2 reduce the dimensions of the data by 99.31% and achieve 97.98% accuracy. In this example, the VQ/FPD tools accomplished further data compression but lower accuracy compared to the FastICA/2D DMWT tools. Various experiments and simulations are carried out using MATLAB. The experimental results of both families confirm the improvements in storage requirements as well as recognition rates compared to some recently reported methods.
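    An illustrative sketch of a family-1 style feature pipeline is given below: a plain 2D DWT compresses each face image and FastICA then extracts discriminating features. The wavelet, image size, and component count are assumptions; the dissertation's exact 2D DMWT / 2D RT configurations are not reproduced.

```python
# Toy feature-extraction pipeline: 2D DWT approximation band + FastICA.
# All sizes and the "database" are placeholders.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
faces = rng.random((40, 64, 64))             # placeholder set of 40 face images

def dwt_features(img):
    cA, _ = pywt.dwt2(img, "haar")           # keep only the approximation band
    return cA.ravel()                         # 64x64 -> 32x32 = 1024 features

X = np.stack([dwt_features(f) for f in faces])
ica = FastICA(n_components=20, random_state=0)
F = ica.fit_transform(X)                      # final 20-D feature vectors

reduction = 1 - F.shape[1] / (64 * 64)
print(f"dimension reduction: {reduction:.2%}")   # ~99.51% for this toy setup
```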

    Blind Source Separation for the Processing of Contact-Less Biosignals

    Get PDF
    (Spatio-temporal) Blind Source Separation (BSS) provides a large potential to process distorted multichannel biosignal measurements in the context of novel contact-less recording techniques, separating distortions from the cardiac signal of interest. This potential can only be practically utilized (1) if a BSS model is applied that matches the complexity of the measurement, i.e. the signal mixture, and (2) if permutation indeterminacy is solved among the BSS output components, i.e. the component of interest can be practically selected. The present work, first, designs a framework to assess the efficacy of BSS algorithms in the context of the camera-based photoplethysmogram (cbPPG) and characterizes multiple BSS algorithms accordingly. Algorithm selection recommendations for certain mixture characteristics are derived. Second, the present work develops and evaluates concepts to solve permutation indeterminacy for BSS outputs of contact-less electrocardiogram (ECG) recordings. The novel approach based on sparse coding is shown to outperform the existing concepts of higher-order moments and frequency-domain features.
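    As a hedged illustration of automated component selection after BSS, the sketch below ranks the output components by a simple kurtosis-style peakedness score and picks the most ECG-like one; this is a stand-in for, not a reproduction of, the sparse-coding approach developed in the thesis.

```python
# Toy post-BSS channel selection: score each output component by excess
# kurtosis (sparse QRS-like spikes score high) and return the best index.
import numpy as np

def select_cardiac_component(S):
    # S: (n_components, n_samples) BSS output signals.
    S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)
    score = (S ** 4).mean(axis=1) - 3.0       # excess kurtosis per component
    return int(np.argmax(score))

# Example: a spiky ECG-like trace beats a smooth drifting one.
rng = np.random.default_rng(0)
t = np.arange(4000)
ecg_like = np.where(t % 400 < 8, 1.0, 0.0) + 0.05 * rng.standard_normal(t.size)
drift = np.sin(2 * np.pi * t / 1000) + 0.05 * rng.standard_normal(t.size)
print(select_cardiac_component(np.vstack([drift, ecg_like])))   # -> 1
```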

    Synchrony, metastability, dynamic integration, and competition in the spontaneous functional connectivity of the human brain

    Get PDF
    The human brain is functionally organized into large-scale neural networks that are dynamically interconnected. Multiple short-lived states of resting-state functional connectivity (rsFC) identified transiently synchronized networks and cross-network integration. However, little is known about the way brain couplings covary as rsFC states wax and wane. In this magnetoencephalography study, we explore the synchronization structure among the spontaneous interactions of well-known resting-state networks (RSNs). To do so, we extracted modes of dynamic coupling that reflect rsFC synchrony and analyzed their spatio-temporal features. These modes identified transient, sporadic rsFC changes characterized by the widespread integration of RSNs across the brain, most prominently in the β band. This is in line with the metastable rsFC state model of resting-state dynamics, wherein our modes fit as state transition processes. Furthermore, the default-mode network (DMN) stood out as being structured into competitive cross-network couplings with widespread DMN-RSN interactions, especially among the β-band modes. These results substantiate the theory that the DMN is a core network enabling dynamic global brain integration in the β band. This work was supported by the Action de Recherche Concertée (ARC Consolidation 2015–2019, “Characterization of the electrophysiological bases, the temporal dynamics and the functional relevance of resting state network” attributed to X.D.T.) and by the research convention “Les Voies du Savoir” (Fonds Erasme, Brussels, Belgium). M.B. benefited from the program Attract of Innoviris (grant 2015-BB2B-10), the Spanish Ministry of Economy and Competitiveness (grant PSI2016-77175-P), and the Marie Skłodowska-Curie Action of the European Commission (grant 743562). M.V.G. and G.N. were supported by the Fonds Erasme. N.C. benefited from a research grant from the ARC Consolidation (2014–2017, “Characterization of the electrophysiological bases, the temporal dynamics and the functional relevance of resting state network” attributed to X.D.T.) and from the Fonds Erasme (research convention “Les Voies du Savoir”). X.D.T. is Post-doctorate Clinical Master Specialist at the Fonds de la Recherche Scientifique (F.R.S.-FNRS, Brussels, Belgium). The MEG project at the CUB – Hôpital Erasme is financially supported by the Fonds Erasme (research convention “Les Voies du Savoir”).