An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation
We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with high reliability. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data generated from statistically independent sources.
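Slow feature analysis rests on a simple principle: among the unit-variance projections of the (whitened) data, the source direction is the one whose projection varies most slowly in time. A minimal pure-Python sketch of that principle on a made-up two-signal mixture; the toy signals and the exhaustive rotation search are illustrative and stand in for the paper's iterative algorithm:

```python
import math

def slowness(sig):
    # mean squared temporal derivative: lower values mean slower variation
    return sum((sig[t + 1] - sig[t]) ** 2 for t in range(len(sig) - 1)) / (len(sig) - 1)

T = 500
slow = [math.sin(2 * math.pi * t / 250) for t in range(T)]   # slowly varying source
fast = [math.sin(2 * math.pi * t / 10) for t in range(T)]    # rapidly varying source

# mix the sources by a 45-degree rotation (stands in for whitened sensor data)
x1 = [(s + f) / math.sqrt(2) for s, f in zip(slow, fast)]
x2 = [(s - f) / math.sqrt(2) for s, f in zip(slow, fast)]

# SFA principle: scan rotations of the mixture and keep the slowest projection
best = min(
    (slowness([math.cos(a) * u + math.sin(a) * v for u, v in zip(x1, x2)]), a)
    for a in (k * math.pi / 180 for k in range(180))
)
print(round(math.degrees(best[1])))   # close to 45: the slow source's direction
```

The recovered angle matches the mixing rotation because the fast source dominates the squared-derivative cost, so the slowest projection is the one that cancels it.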
Separating a Real-Life Nonlinear Image Mixture
When acquiring an image of a paper document, the image printed on the back page sometimes shows through. The mixture of the front- and back-page images thus obtained is markedly nonlinear, and thus constitutes a good real-life test case for nonlinear blind source separation.
This paper addresses a difficult version of this problem, corresponding to the use of "onion skin" paper, which results in a relatively strong nonlinearity that becomes close to singular in the lighter regions of the images. The separation is achieved through the MISEP technique, an extension of the well-known INFOMAX method. The separation results are assessed with objective quality measures; they show an improvement over the results obtained with linear separation, but leave room for further improvement.
Performing Nonlinear Blind Source Separation with Signal Invariants
Given a time series of multicomponent measurements x(t), the usual objective
of nonlinear blind source separation (BSS) is to find a "source" time series
s(t), comprised of statistically independent combinations of the measured
components. In this paper, the source time series is required to have a density
function in (s,ds/dt)-space that is equal to the product of density functions
of individual components. This formulation of the BSS problem has a solution
that is unique, up to permutations and component-wise transformations.
Separability is shown to impose constraints on certain locally invariant
(scalar) functions of x, which are derived from local higher-order correlations
of the data's velocity dx/dt. The data are separable if and only if they
satisfy these constraints, and, if the constraints are satisfied, the sources
can be explicitly constructed from the data. The method is illustrated by using
it to separate two speech-like sounds recorded with a single microphone.
Binary Independent Component Analysis with OR Mixtures
Independent component analysis (ICA) is a computational method for separating
a multivariate signal into subcomponents assuming the mutual statistical
independence of the non-Gaussian source signals. The classical Independent
Components Analysis (ICA) framework usually assumes linear combinations of
independent sources over the field of real numbers R. In this paper, we
investigate binary ICA for OR mixtures (bICA), which can find applications in
many domains including medical diagnosis, multi-cluster assignment, Internet
tomography and network resource management. We prove that bICA is uniquely
identifiable under the disjunctive generation model, and propose a
deterministic iterative algorithm to determine the distribution of the latent
random variables and the mixing matrix. The inverse problem of inferring the
values of the latent variables from noisy measurements is also considered. We
conduct an extensive simulation study to verify the effectiveness of the
proposed algorithm and present examples of real-world applications where bICA
can be applied.
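The disjunctive generation model behind bICA is easy to simulate: each observation is the OR of the binary sources it is wired to through the mixing matrix. A small sketch under assumed parameters (the mixing matrix, source rate, and sample size below are invented for illustration):

```python
import random

random.seed(0)
n_src, n_obs, T, p = 3, 4, 2000, 0.2
# hypothetical binary mixing matrix G: observation i sees source j iff G[i][j] == 1
G = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0],
     [0, 0, 1]]
# independent Bernoulli(p) binary sources
S = [[1 if random.random() < p else 0 for _ in range(T)] for _ in range(n_src)]
# disjunctive (OR) mixing: x_i(t) = OR_j (G[i][j] AND s_j(t))
X = [[max(G[i][j] & S[j][t] for j in range(n_src)) for t in range(T)]
     for i in range(n_obs)]
# under the OR model, P(x_i = 1) = 1 - (1 - p)^(row weight of G); comparing such
# statistics with empirical rates is the kind of check identifiability rests on
for i in range(n_obs):
    predicted = 1 - (1 - p) ** sum(G[i])
    observed = sum(X[i]) / T
    print(i, round(predicted, 3), round(observed, 3))
```

Note the contrast with linear ICA: OR mixing is not invertible by a matrix, which is why the paper needs a dedicated identifiability proof and iterative algorithm.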
A convolutional neural network based deep learning methodology for recognition of partial discharge patterns from high voltage cables
It is a great challenge to differentiate partial discharge (PD) induced by different types of insulation defects in high-voltage cables. Some types of PD signals have very similar characteristics and are especially difficult to differentiate, even for the most experienced specialists. To overcome this challenge, a convolutional neural network (CNN)-based deep learning methodology for PD pattern recognition is presented in this paper. First, PD testing for five types of artificial defects in ethylene-propylene-rubber cables is carried out in a high-voltage laboratory to generate signals containing PD data. Second, 3500 sets of PD transient pulses are extracted, and 33 PD features are established. The third stage applies a CNN to the data; a typical CNN architecture and the key factors that affect CNN-based pattern recognition accuracy are described, including the number of network layers, convolutional kernel size, activation function, and pooling method. This paper presents a flowchart of the CNN-based PD pattern recognition method and an evaluation with the 3500 sets of PD samples. Finally, the CNN-based pattern recognition results are shown, and the proposed method is compared with two more traditional analysis methods, i.e., support vector machine (SVM) and back-propagation neural network (BPNN). The results show that the proposed CNN method achieves higher pattern recognition accuracy than SVM and BPNN, and that it is especially effective for PD type recognition among signals of high similarity, which makes it applicable to industrial use.
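The architectural factors the abstract singles out (kernel size, activation function, pooling method) can be seen in miniature in a single 1-D convolution, ReLU, max-pooling chain. A pure-Python sketch on a made-up PD-like pulse with a hypothetical kernel, not the paper's actual network:

```python
def conv1d(signal, kernel):
    # "valid" convolution (really cross-correlation, as in most CNN libraries)
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    # activation: zero out negative responses
    return [x if x > 0 else 0 for x in xs]

def max_pool(xs, size):
    # non-overlapping max-pooling: keep the strongest response per window
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

pulse = [0, 0, 1, 3, 1, 0, 0, -1, -3, -1, 0, 0]   # toy PD-like transient
edge_kernel = [-1, 0, 1]                          # hypothetical learned kernel
feature_map = max_pool(relu(conv1d(pulse, edge_kernel)), 2)
print(feature_map)   # [3, 0, 0, 0, 3]
```

Changing the kernel length, swapping ReLU for another activation, or replacing max- with average-pooling changes this feature map directly, which is why those choices drive recognition accuracy.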
A Sparsity-Based Method for Blind Compensation of a Memoryless Nonlinear Distortion: Application to Ion-Selective Electrodes
In this paper, we propose a method for blind compensation of a memoryless nonlinear distortion. We assume as prior information that the desired signal admits a sparse representation in a transformed domain that is known in advance. Then, given that a nonlinear distortion tends to generate signals that are less sparse than the desired one, our proposal is to build a compensating function model that gives rise to a maximally sparse signal. The implementation of this proposal has, as central elements, a criterion built upon an approximation of the ℓ0-norm, the use of polynomial functions as compensating structures, and an optimization strategy based on sequential quadratic programming. We provide a theoretical analysis of the ℓ0-norm criterion and results on synthetic data. We also employ the method in an actual application related to chemical analysis via ion-selective electrode arrays.
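The method's premise, that a memoryless nonlinearity makes the signal less sparse in the transform domain, and a smoothed ℓ0-style criterion can both be illustrated in a few lines. The sketch below uses a naive DFT as the (assumed known) sparsifying transform and a Gaussian approximation of the ℓ0 norm; the paper's polynomial compensator and SQP optimization are not reproduced:

```python
import cmath
import math

def smoothed_l0(xs, sigma=0.02):
    # Gaussian approximation of the l0 norm: contributes ~1 per entry whose
    # magnitude is well above sigma, and ~0 for near-zero entries
    return sum(1.0 - math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs)

def dft_mag(x):
    # naive DFT magnitudes; stands in for the known sparsifying transform
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N)]

N = 64
clean = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]  # single-tone: sparse spectrum
distorted = [x + 0.5 * x ** 3 for x in clean]                  # memoryless nonlinearity
# the cubic term adds a third harmonic, so the distorted spectrum is less sparse
print(smoothed_l0(dft_mag(clean)) < smoothed_l0(dft_mag(distorted)))   # True
```

A compensator would be fitted by minimizing this criterion over its parameters, driving the output back toward the maximally sparse (undistorted) signal.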
Acoustic emission signal processing framework to identify fracture in aluminum alloys
Acoustic emission (AE) is a common nondestructive evaluation tool that has been used to monitor fracture in materials and structures. The direct connection between AE events and their source, however, is difficult to establish because of material, geometry, and sensor contributions to the recorded signals. Moreover, the recorded AE activity is affected by several noise sources, which further complicates the identification process. This article uses a combination of in situ experiments inside a scanning electron microscope, to observe fracture in an aluminum alloy at the time and scale at which it occurs, and a novel AE signal processing framework to identify characteristics that correlate with fracture events. Specifically, a signal processing method is designed to cluster AE activity based on a subset of features objectively identified by examining their correlation and variance. The identified clusters are then compared to both mechanical data and in situ observed microstructural damage. Results from a set of nanoindentation tests, as well as a carefully designed computational model, are also presented to validate the conclusions drawn from the signal processing.
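The feature-subset step, keeping features that carry variance and discarding those redundant with a feature already kept, can be sketched as a simple greedy correlation/variance filter. The feature names, data, and thresholds below are invented for illustration, not taken from the article:

```python
import math
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def select_features(feats, var_tol=1e-6, corr_tol=0.95):
    # greedy filter: drop near-constant features, then drop any feature that is
    # almost perfectly correlated with one already kept
    kept = []
    for name, col in feats:
        if variance(col) < var_tol:
            continue
        if any(abs(pearson(col, kcol)) > corr_tol for _, kcol in kept):
            continue
        kept.append((name, col))
    return [name for name, _ in kept]

random.seed(1)
rise = [random.random() for _ in range(50)]
feats = [("rise_time", rise),
         ("rise_time_ms", [1000 * x for x in rise]),  # redundant rescaling
         ("sensor_id", [7.0] * 50),                   # constant: no information
         ("energy", [random.random() for _ in range(50)])]
print(select_features(feats))   # ['rise_time', 'energy']
```

Clustering is then run on the surviving features only, which keeps the cluster geometry from being dominated by duplicated or uninformative measurements.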
Blind Source Separation for the Processing of Contact-Less Biosignals
(Spatio-temporal) Blind Source Separation (BSS) offers large potential for processing distorted multichannel measurements in the context of novel contact-less biosignal recording techniques, separating distortions from the cardiac signal of interest. This potential can only be practically utilized (1) if a BSS model is applied that matches the complexity of the measurement, i.e. the signal mixture, and (2) if permutation indeterminacy is solved among the BSS output components, i.e. the component of interest can be automatically selected. The present work first designs a framework to assess the efficacy of BSS algorithms in the context of the camera-based photoplethysmogram (cbPPG) and characterizes multiple BSS algorithms accordingly.
Algorithm selection recommendations for certain mixture characteristics are derived. Second, the present work develops and evaluates concepts to solve permutation indeterminacy for BSS outputs of contact-less electrocardiogram (ECG) recordings. The novel approach based on sparse coding is shown to outperform existing concepts based on higher-order moments and frequency-domain features.