
    New Negentropy Optimization Schemes for Blind Signal Extraction of Complex Valued Sources

    Blind signal extraction, an active topic in communication signal processing, aims to retrieve source signals through the optimization of contrast functions. Many contrasts based on higher-order statistics, such as kurtosis, are sensitive to outliers. To achieve robust results, nonlinear functions are therefore used as contrasts that approximate the negentropy criterion, a classical measure of non-Gaussianity. However, existing methods generally have a high computational cost, which leads us to address the efficient optimization of the contrast function. More precisely, we design a novel “reference-based” contrast function built on negentropy approximations, and then propose a new family of algorithms (Alg.1 and Alg.2) to maximize it. Simulations confirm that our method converges to a separating solution, which is also analyzed theoretically. We also validate the theoretical complexity analysis showing that Alg.2 has a much lower computational cost than Alg.1 and than existing optimization methods based on the negentropy criterion. Finally, experiments on the separation of single-sideband signals illustrate that our method has good prospects in real-world applications.
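
    As a concrete illustration of the criterion involved (not of the paper's reference-based contrast or of Alg.1/Alg.2, which the abstract does not spell out), the following minimal Python sketch evaluates the classical one-unit negentropy approximation J(y) ≈ (E[G(y)] − E[G(ν)])² with the robust nonlinearity G(u) = log cosh(u) and ν a standard Gaussian variable; all names are illustrative.

        import numpy as np

        def negentropy_approx(y, n_gauss=100_000, seed=0):
            # J(y) ~ (E[G(y)] - E[G(nu)])^2 with G(u) = log cosh(u),
            # nu ~ N(0, 1); evaluated on a standardised copy of y
            rng = np.random.default_rng(seed)
            y = (y - y.mean()) / y.std()
            G = lambda u: np.log(np.cosh(u))
            e_gauss = G(rng.standard_normal(n_gauss)).mean()
            return (G(y).mean() - e_gauss) ** 2

        rng = np.random.default_rng(1)
        print(negentropy_approx(rng.laplace(size=10_000)))     # super-Gaussian: clearly positive
        print(negentropy_approx(rng.standard_normal(10_000)))  # Gaussian: close to zero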

    Of `Cocktail Parties' and Exoplanets

    The characterisation of ever smaller and fainter extrasolar planets requires an intricate understanding of one's data and the analysis techniques used. Correcting the raw data at the 10^-4 level of accuracy in flux is one of the central challenges. This can be difficult for instruments that do not feature a calibration plan for such high precision measurements. Here, it is not always obvious how to de-correlate the data using auxiliary information of the instrument and it becomes paramount to know how well one can disentangle instrument systematics from one's data, given nothing but the data itself. We propose a non-parametric machine learning algorithm, based on the concept of independent component analysis, to de-convolve the systematic noise and all non-Gaussian signals from the desired astrophysical signal. Such a `blind' signal de-mixing is commonly known as the `Cocktail Party problem' in signal-processing. Given multiple simultaneous observations of the same exoplanetary eclipse, as in the case of spectrophotometry, we show that we can often disentangle systematic noise from the original light curve signal without the use of any complementary information of the instrument. In this paper, we explore these signal extraction techniques using simulated data and two data sets observed with the Hubble-NICMOS instrument. Another important application is the de-correlation of the exoplanetary signal from time-correlated stellar variability. Using data obtained by the Kepler mission we show that the desired signal can be de-convolved from the stellar noise using a single time series spanning several eclipse events. Such non-parametric techniques can provide important confirmations of the existent parametric corrections reported in the literature, and their associated results. Additionally, they can substantially improve the precision of exoplanetary light curve analysis in the future. Comment: ApJ accepted.
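
    The authors' pipeline is not reproduced here, but the underlying idea of de-mixing multiple simultaneous observations is easy to sketch with an off-the-shelf ICA. The Python example below (illustrative signals and mixing matrix, not Hubble-NICMOS or Kepler data) mixes a toy eclipse dip with a non-Gaussian instrument systematic across five simulated channels and recovers both with scikit-learn's FastICA; note that ICA returns sources only up to permutation, sign and scale.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 2000)

        # toy astrophysical signal: a box-shaped eclipse dip
        eclipse = np.where((t > 0.45) & (t < 0.55), -1.0, 0.0)
        # toy instrument systematic: a decaying non-Gaussian square wave
        systematic = np.sign(np.sin(2.0 * np.pi * 3.0 * t)) * np.exp(-t)

        # five simultaneous "channels": different unknown mixtures of
        # the same two components, plus white noise
        A = rng.uniform(0.5, 1.5, size=(5, 2))
        X = A @ np.vstack([eclipse, systematic])
        X += 0.01 * rng.standard_normal(X.shape)

        ica = FastICA(n_components=2, random_state=0)
        S = ica.fit_transform(X.T)   # shape (2000, 2): recovered sources,
                                     # one column per underlying component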

    Overlearning in marginal distribution-based ICA: analysis and solutions

    The present paper is written as a word of caution to users of independent component analysis (ICA) about overlearning phenomena that are often observed. We consider two types of overlearning, typical of ICA based on higher-order statistics. These algorithms can be seen to maximise the negentropy of the source estimates. The first kind of overlearning results in the generation of spike-like signals if there are not enough samples in the data or a considerable amount of noise is present. It is argued that, if the data has a power spectrum characterised by a 1/f curve, we face a more severe problem, which cannot be solved inside the strict ICA model. This overlearning is better characterised by bumps instead of spikes. Both overlearning types are demonstrated on artificial signals as well as magnetoencephalograms (MEG). Several methods are suggested to circumvent both types, either by making the estimation of the ICA model more robust or by including further modelling of the data.
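
    The first overlearning type is easy to reproduce: give a higher-order-statistics ICA far fewer samples than the number of estimated components requires, and it will manufacture spike-like, strongly super-Gaussian "sources" from pure noise. A minimal Python sketch follows (illustrative sizes; FastICA stands in for any negentropy-maximising algorithm), together with dimension reduction as one common remedy.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        # 60 channels but only 200 samples of pure Gaussian noise:
        # far too few samples per estimated component
        X = rng.standard_normal((200, 60))

        ica = FastICA(n_components=60, random_state=0, max_iter=1000)
        S = ica.fit_transform(X)     # (200, 60) "source" estimates

        def excess_kurtosis(s):
            s = (s - s.mean()) / s.std()
            return (s ** 4).mean() - 3.0

        # spike-like estimates are strongly super-Gaussian even though
        # the data contain no real sources at all
        print(max(excess_kurtosis(S[:, i]) for i in range(S.shape[1])))

        # one common remedy: reduce dimension first so that the
        # samples-per-component ratio becomes reasonable
        S_small = FastICA(n_components=5, random_state=0).fit_transform(X)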

    From neural PCA to deep unsupervised learning

    A network supporting deep unsupervised learning is presented. The network is an autoencoder with lateral shortcut connections from the encoder to the decoder at each level of the hierarchy. The lateral shortcut connections allow the higher levels of the hierarchy to focus on abstract invariant features. While standard autoencoders are analogous to latent variable models with a single layer of stochastic variables, the proposed network is analogous to hierarchical latent variable models. Learning combines the denoising autoencoder and denoising source separation frameworks. Each layer of the network contributes to the cost function a term which measures the distance between the representations produced by the encoder and the decoder. Since training signals originate from all levels of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hierarchy and the ability to learn invariant features are demonstrated in experiments. Comment: a revised version of an article accepted for publication in Advances in Independent Component Analysis and Learning Machines (2015), edited by Ella Bingham, Samuel Kaski, Jorma Laaksonen and Jouko Lampinen.
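
    A minimal sketch of the key structural idea follows: a denoising autoencoder whose decoder receives a lateral shortcut from the corresponding encoder layer at every level, and whose cost sums a per-level distance between the clean encoder representations and the decoder's reconstructions. The layer sizes, the sigmoid-gated combinator, and the ReLU activations are illustrative assumptions, not the exact network of the paper.

        import torch
        import torch.nn as nn

        class LadderLikeAE(nn.Module):
            # encoder/decoder with a lateral shortcut and a reconstruction
            # cost term at every level (illustrative sizes and combinator)
            def __init__(self, dims=(784, 256, 64)):
                super().__init__()
                self.enc = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims, dims[1:]))
                self.dec = nn.ModuleList(nn.Linear(b, a) for a, b in zip(dims, dims[1:]))
                # learned scalar gate for each lateral shortcut
                self.mix = nn.ParameterList(nn.Parameter(torch.tensor(0.0))
                                            for _ in dims[:-1])

            def forward(self, x, noise=0.3):
                clean = [x]                            # clean pass: per-level targets
                for layer in self.enc:
                    clean.append(torch.relu(layer(clean[-1])))
                h = x + noise * torch.randn_like(x)    # corrupted pass (denoising)
                lat = [h]
                for layer in self.enc:
                    h = torch.relu(layer(h))
                    lat.append(h)
                cost, top = 0.0, lat[-1]
                for i in reversed(range(len(self.dec))):
                    top = self.dec[i](top)
                    if i > 0:
                        top = torch.relu(top)
                    g = torch.sigmoid(self.mix[i])
                    top = g * top + (1 - g) * lat[i]   # lateral shortcut combinator
                    cost = cost + ((top - clean[i]) ** 2).mean()  # per-level cost term
                return cost

        model = LadderLikeAE()
        loss = model(torch.randn(32, 784))
        loss.backward()                                # training signals reach every level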

    Blind image separation based on exponentiated transmuted Weibull distribution

    In recent years, blind image separation has been investigated extensively, and a number of feature extraction algorithms for direct application to such image structures have been developed. A motivating example is the separation of mixed fingerprints found at a crime scene: a mixture of two or more fingerprints may be recovered, and they must be separated before identification. In this paper, we propose a new technique for separating multiple mixed images based on the exponentiated transmuted Weibull distribution. To adaptively estimate the parameters of the corresponding score functions, an efficient method based on maximum likelihood and a genetic algorithm is used. We also evaluate the accuracy of the proposed distribution and compare the performance of the algorithm with that of other previously used generalized distributions. The numerical results show that the proposed distribution is flexible and efficient. Comment: 14 pages, 12 figures, 4 tables. International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 3, March 2016 (pp. 423-433).
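
    The abstract does not give the parameterization, so the Python sketch below assumes the common exponentiated-transmuted-Weibull construction F(x) = {F_W(x)[1 + λ(1 − F_W(x))]}^a with Weibull CDF F_W(x) = 1 − exp(−(x/η)^β); the form used as a score-function model in the paper may differ, and all parameter names are illustrative.

        import numpy as np

        def etw_pdf(x, eta, beta, lam, a):
            # assumed construction: F(x) = (Fw(x) * (1 + lam*(1 - Fw(x))))**a
            # with Fw the Weibull CDF; requires eta, beta, a > 0 and |lam| <= 1
            x = np.asarray(x, dtype=float)
            Fw = 1.0 - np.exp(-(x / eta) ** beta)                       # Weibull CDF
            fw = (beta / eta) * (x / eta) ** (beta - 1.0) * np.exp(-(x / eta) ** beta)
            Ftw = Fw * (1.0 + lam * (1.0 - Fw))                         # transmuted CDF
            ftw = fw * (1.0 + lam - 2.0 * lam * Fw)                     # transmuted density
            return a * Ftw ** (a - 1.0) * ftw                           # exponentiated density

        # sanity check: the density should integrate to about 1
        xs = np.linspace(1e-6, 30.0, 300_000)
        print(etw_pdf(xs, eta=2.0, beta=1.5, lam=0.3, a=2.0).sum() * (xs[1] - xs[0]))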