    Convex Cauchy Schwarz Independent Component Analysis for Blind Source Separation

    We present a new high-performance Convex Cauchy-Schwarz Divergence (CCS-DIV) measure for Independent Component Analysis (ICA) and Blind Source Separation (BSS). The CCS-DIV measure is developed by integrating convex functions into the Cauchy-Schwarz inequality. By including a convexity quality parameter, the measure offers a broad control range over its convexity curvature. With this measure, a new CCS-ICA algorithm is structured, and a nonparametric form is developed that incorporates Parzen-window density estimation. Furthermore, pairwise iterative schemes are employed to tackle the high-dimensional problem in BSS. We present two pairwise nonparametric ICA schemes, one based on gradient descent and the other on the Jacobi iterative method. Several case-study scenarios are carried out on noise-free and noisy mixtures of speech and music signals. Finally, the superiority of the proposed CCS-ICA algorithm is demonstrated through performance-metric comparisons with FastICA, RobustICA, convex ICA (C-ICA), and other leading existing algorithms. Comment: 13 pages
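    As a concrete reference point for the divergence family the abstract builds on, the following is a minimal sketch of the standard (non-convex) Cauchy-Schwarz divergence between two 1-D samples, estimated with Gaussian Parzen windows; it is not the paper's CCS-DIV, and the kernel width sigma is an assumed free parameter.

        import numpy as np

        def information_potential(x, y, sigma):
            # (1/NM) * sum_ij N(x_i - y_j; 0, 2*sigma^2): Gaussian-kernel estimate of
            # the integral of the product of the two Parzen density estimates
            d = x[:, None] - y[None, :]
            s2 = 2.0 * sigma ** 2
            return np.mean(np.exp(-d ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))

        def cauchy_schwarz_divergence(x, y, sigma=0.5):
            # D_CS = -log( V_xy^2 / (V_xx * V_yy) ); zero iff the density estimates coincide
            v_xy = information_potential(x, y, sigma)
            v_xx = information_potential(x, x, sigma)
            v_yy = information_potential(y, y, sigma)
            return -np.log(v_xy ** 2 / (v_xx * v_yy))

        rng = np.random.default_rng(0)
        print(cauchy_schwarz_divergence(rng.normal(size=500), rng.laplace(size=500)))

    The paper's CCS-DIV additionally embeds a convex function with a tunable curvature parameter into this inequality; that construction is not reproduced here.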

    Bayesian astrostatistics: a backward look to the future

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects that do not pass traditional detection thresholds. Comment: 27 pp, 4 figures. A lightly revised version of a chapter in "Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed., Springer, New York, forthcoming in 2012), the inaugural volume for the Springer Series in Astrostatistics. Version 2 has minor clarifications and an additional reference.
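    As a toy illustration of the reporting recommendation (likelihood summaries rather than thresholded catalogs), the sketch below combines per-candidate Gaussian likelihood summaries, including a sub-threshold one, to infer a shared population-level flux; the numbers, the Gaussian form, and the single shared-flux model are illustrative assumptions, not anything from the chapter.

        import numpy as np

        # each candidate source is reported as (likelihood peak, likelihood width),
        # not as a detection or an upper limit; the values below are made up
        likelihood_summaries = [(5.1, 1.0), (0.8, 1.2), (2.3, 0.9)]

        def log_population_likelihood(flux):
            # product of per-source Gaussian likelihoods evaluated at a common flux
            return sum(-0.5 * ((flux - c) / w) ** 2 - np.log(w)
                       for c, w in likelihood_summaries)

        grid = np.linspace(0.0, 8.0, 801)
        best = grid[np.argmax([log_population_likelihood(f) for f in grid])]
        print("maximum-likelihood common flux:", best)

    A real multilevel analysis would place a population distribution over the per-source fluxes rather than forcing a single shared value; the point here is only that sub-threshold likelihood summaries still contribute information.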

    Connecting the Dots: Identifying Network Structure via Graph Signal Processing

    Network topology inference is a prominent problem in Network Science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known, and then analyze how the graph's algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks, and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation. This tutorial offers an overview of graph learning methods developed to bridge the aforementioned gap by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated by a network process defined on such a graph. A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks, nonlinear models of pairwise interaction, as well as extensions to directed graphs and their relation to causal inference. All in all, this paper introduces readers to challenges and opportunities for signal processing research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time.
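    A minimal sketch of the first, statistically mature route the tutorial surveys (covariance selection for Gaussian graphical models) is given below; it runs scikit-learn's graphical lasso on synthetic graph signals, and the regularization strength and edge threshold are assumed values.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 8))              # 500 graph signals observed on 8 nodes
        X[:, 1] += 0.8 * X[:, 0]                   # plant one conditional dependency
        model = GraphicalLasso(alpha=0.1).fit(X)   # sparse inverse-covariance estimate

        adjacency = (np.abs(model.precision_) > 1e-3).astype(int)
        np.fill_diagonal(adjacency, 0)             # nonzero off-diagonals are read as edges
        print(adjacency)

    The GSP-based inference frameworks described later in the tutorial replace this purely statistical criterion with models of how the observed signals are generated on the latent graph.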

    An array-based receiver function deconvolution method: methodology and application

    Receiver functions (RFs) estimated on dense arrays have been widely used for the study of Earth structures across multiple scales. However, due to the ill-posedness of deconvolution, RF estimation faces challenges such as non-uniqueness and data overfitting. In this paper, we present an array-based RF deconvolution method in the context of emerging dense arrays. We propose to exploit the wavefield coherency along a dense array by jointly inverting waveforms from multiple events and stations for RFs with the minimum number of phases required by the data. The new method can effectively reduce the instability of deconvolution and help retrieve RFs with higher fidelity. We test the algorithm on synthetic waveforms and show that it produces RFs with higher interpretability than those from conventional RF estimation practice. We then apply the method to real data from the 2016 Incorporated Research Institutions for Seismology (IRIS) community wavefield experiment in Oklahoma and are able to generate high-resolution RF profiles with only three teleseismic earthquakes recorded by the temporary deployment. This new method should help enhance RF images derived from short-term high-density seismic profiles.
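    For contrast with the array-based joint inversion proposed here, the sketch below shows the conventional single-station practice the paper compares against: frequency-domain water-level deconvolution of the radial component by the vertical component, with an assumed water level and Gaussian low-pass width.

        import numpy as np

        def waterlevel_receiver_function(radial, vertical, dt, water=0.01, a=2.5):
            # R(w) * conj(Z(w)) / max(|Z|^2, water * max|Z|^2), low-passed by a
            # Gaussian of width parameter `a`; both parameters are assumptions
            n = len(radial)
            R, Z = np.fft.rfft(radial), np.fft.rfft(vertical)
            denom = np.maximum(np.abs(Z) ** 2, water * np.max(np.abs(Z) ** 2))
            omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
            gauss = np.exp(-omega ** 2 / (4.0 * a ** 2))
            return np.fft.irfft(R * np.conj(Z) / denom * gauss, n)

    The noise sensitivity of this per-station spectral division is the kind of instability the joint multi-event, multi-station inversion is designed to reduce.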

    A stochastic algorithm for probabilistic independent component analysis

    The decomposition of a sample of images onto a relevant subspace is a recurrent problem in many different fields, from computer vision to medical image analysis. In this paper we propose a new learning principle and implementation of the generative decomposition model generally known as noisy ICA (independent component analysis), based on the SAEM algorithm, which is a versatile stochastic approximation of the standard EM algorithm. We demonstrate the applicability of the method on a large range of decomposition models and illustrate the developments with experimental results on various data sets. Comment: Published at http://dx.doi.org/10.1214/11-AOAS499 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
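    The SAEM ingredient referenced in the abstract can be sketched generically as follows: the E-step expectation is replaced by simulating the latent components and averaging their sufficient statistics with a decreasing step size before each M-step. The helper functions are placeholders rather than the paper's decomposition model, and the step-size schedule is an assumption.

        def saem(y, theta, simulate_latents, sufficient_stats, m_step,
                 n_iter=200, burn_in=50):
            # y: observed data; theta: initial parameters; the three callables
            # encode the model and are supplied by the user
            s = None
            for k in range(1, n_iter + 1):
                z = simulate_latents(y, theta)          # S-step: draw latent components
                s_new = sufficient_stats(y, z)          # complete-data sufficient statistics
                gamma = 1.0 if k <= burn_in else 1.0 / (k - burn_in)   # SA step size
                s = s_new if s is None else [(1.0 - gamma) * a + gamma * b
                                             for a, b in zip(s, s_new)]
                theta = m_step(s)                       # M-step: maximize in theta
            return theta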

    Improving Multiple Surface Range Estimation of a 3-Dimensional FLASH LADAR in the Presence of Atmospheric Turbulence

    Laser Radar sensors can be designed to provide two-dimensional and three-dimensional (3-D) images of a scene from a single laser pulse. Currently, various data recording and presentation techniques are being developed for 3-D sensors. While the technology is still being proven, many applications are being explored and suggested. As technological advancements are coupled with enhanced signal processing algorithms, this technology may present exciting new military capabilities for sensor users. The goal of this work is to develop an algorithm that enhances the utility of 3-D Laser Radar sensors through accurate ranging to multiple surfaces per image pixel while minimizing the effects of diffraction. Via a new 3-D blind deconvolution algorithm, it is possible to realize numerous enhancements over both traditional Gaussian mixture modeling and single-surface range estimation. While traditional Gaussian mixture modeling can effectively model the received pulse, the pulse shape is likely altered by optical aberrations of the imaging system and by the medium through which it is imaged. Simulation examples show that the multi-surface ranging algorithm derived in this work improves range estimation over standard Gaussian mixture modeling and frame-by-frame deconvolution by up to 89% and 85%, respectively.
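    The baseline the work improves on, per-pixel Gaussian mixture modeling of the return waveform, can be sketched as a two-pulse least-squares fit; the pulse model, noise level, and timing values below are synthetic assumptions, and no blind deconvolution of the optical blur is attempted.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_pulse(t, a1, t1, a2, t2, w):
            # two Gaussian return pulses from two surfaces in one pixel (times in ns)
            return (a1 * np.exp(-(t - t1) ** 2 / (2 * w ** 2))
                    + a2 * np.exp(-(t - t2) ** 2 / (2 * w ** 2)))

        t = np.arange(0.0, 200.0, 1.0)                       # 1 ns range gates
        rng = np.random.default_rng(2)
        waveform = two_pulse(t, 1.0, 60.0, 0.6, 95.0, 4.0) + 0.02 * rng.normal(size=t.size)

        popt, _ = curve_fit(two_pulse, t, waveform, p0=(1.0, 55.0, 0.5, 100.0, 5.0))
        print("surface ranges (m):", 0.15 * popt[1], 0.15 * popt[3])   # r = c*t/2, c/2 ~ 0.15 m/ns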

    A review of earth-viewing methods for in-flight assessment of modulation transfer function and noise of optical spaceborne sensors

    Several Earth observation satellites carry optical imaging sensors whose outputs are essential to many environmental applications. This paper focuses on two parameters of the quality of the imaging system: the Modulation Transfer Function (MTF) and the Signal-to-Noise Ratio (SNR). These two parameters evolve over time and should be periodically monitored in-flight to control the quality of delivered images and possibly mitigate defects. Only a very limited number of past and current sensors have an on-board calibration device fully appropriate for noise assessment, and none of them has capabilities for MTF assessment. Most often, vicarious techniques based on the Earth-viewing approach must be employed: an image, or a combination of images, is selected because the landscape offers certain properties, e.g., well-marked contrast or, on the contrary, spatial homogeneity, whose knowledge or modeling permits the assessment of these parameters. Several methods have been proposed to perform such in-flight assessments. This paper reviews the principles and techniques employed in this domain.
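    The simplest earth-viewing noise assessment in this family can be sketched as follows: an image patch over a spatially homogeneous landscape is selected so that residual variability can be attributed to sensor noise; the uniform-scene values below are synthetic, and real practice must also verify the homogeneity assumption.

        import numpy as np

        def homogeneous_area_snr(patch):
            # SNR = mean signal level / spatial standard deviation over the patch;
            # meaningful only if the selected landscape is genuinely uniform
            return patch.mean() / patch.std(ddof=1)

        rng = np.random.default_rng(3)
        patch = 120.0 + 2.0 * rng.normal(size=(64, 64))      # synthetic uniform scene
        print("estimated SNR:", homogeneous_area_snr(patch))

    MTF assessment, by contrast, relies on landscapes with well-marked contrast, such as sharp edges, from which the sensor's spatial frequency response can be recovered.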

    Toward single particle reconstruction without particle picking: Breaking the detection limit

    Single-particle cryo-electron microscopy (cryo-EM) has recently joined X-ray crystallography and NMR spectroscopy as a high-resolution structural method for biological macromolecules. In a cryo-EM experiment, the microscope produces images called micrographs. Projections of the molecule of interest are embedded in the micrographs at unknown locations and under unknown viewing directions. Standard imaging techniques first locate these projections (detection) and then reconstruct the 3-D structure from them. Unfortunately, high noise levels hinder detection. When reliable detection is rendered impossible, the standard techniques fail. This is a problem especially for small molecules, which can be particularly hard to detect. In this paper, we propose a radically different approach: we contend that the structure could, in principle, be reconstructed directly from the micrographs, without intermediate detection. As a result, even small molecules should be within reach for cryo-EM. To support this claim, we set up a simplified mathematical model and demonstrate how our autocorrelation analysis technique makes it possible to go directly from the micrographs to the sought signals. This involves only one pass over the micrographs, which is desirable for large experiments. We show numerical results and discuss the challenges that lie ahead in turning this proof of concept into a competitive alternative to state-of-the-art algorithms.
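    The statistics behind the claimed single pass over the data can be sketched as an accumulation of micrograph autocorrelations; the FFT-based circular autocorrelation below is an assumed simplification, and the recovery of the signal from these statistics (the paper's contribution) is not shown.

        import numpy as np

        def micrograph_autocorrelation(micrograph, max_shift=32):
            # circular autocorrelation of the mean-subtracted micrograph via FFT,
            # cropped to small shifts; such statistics can be summed over many
            # micrographs in a single pass through the data
            m = micrograph - micrograph.mean()
            ac = np.fft.ifft2(np.abs(np.fft.fft2(m)) ** 2).real / m.size
            ac = np.fft.fftshift(ac)
            cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
            return ac[cy - max_shift:cy + max_shift + 1, cx - max_shift:cx + max_shift + 1]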

    Gradient Algorithms for Complex Non-Gaussian Independent Component/Vector Extraction, Question of Convergence

    We revisit the problem of extracting one independent component from an instantaneous linear mixture of signals. The mixing matrix is parameterized by two vectors: one column of the mixing matrix and one row of the de-mixing matrix. The separation is based on the non-Gaussianity of the source of interest, while the other background signals are assumed to be Gaussian. Three gradient-based estimation algorithms are derived using the maximum likelihood principle and are compared with the Natural Gradient algorithm for Independent Component Analysis and with One-unit FastICA based on negentropy maximization. The ideas and algorithms are also generalized to the extraction of a vector component when the extraction proceeds jointly from a set of instantaneous mixtures. Throughout the paper, we address the size of the region of convergence within which the algorithms guarantee the extraction of the desired source, and we show how that size is influenced by the ratio of powers of the sources within the mixture. Simulations comparing several algorithms confirm this observation: the algorithms show different convergence behavior depending on whether the source of interest is dominant or weak. Our proposed modifications of the gradient methods, which take the dominance or weakness of the source into account, show an improved global convergence property.
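    As a point of reference for the algorithms compared in the paper, here is a minimal sketch of one-unit gradient extraction on whitened mixtures driven by a negentropy-style non-Gaussianity contrast (the FastICA-like baseline), not the paper's maximum-likelihood gradient algorithms; the step size, iteration count, and log-cosh nonlinearity are assumptions.

        import numpy as np

        def extract_one_source(X, mu=0.2, n_iter=500, seed=0):
            # X: whitened mixtures, shape (n_channels, n_samples)
            rng = np.random.default_rng(seed)
            g_gauss = np.mean(np.log(np.cosh(rng.normal(size=10_000))))  # Gaussian reference
            w = rng.normal(size=X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(n_iter):
                y = w @ X
                gamma = np.mean(np.log(np.cosh(y))) - g_gauss    # sign/size of the contrast
                w += mu * gamma * (X * np.tanh(y)).mean(axis=1)  # gradient step
                w /= np.linalg.norm(w)                           # stay on the unit sphere
            return w @ X

    Whether such an iteration converges to the intended source depends on the initialization and on the power of that source relative to the background, which is exactly the region-of-convergence question the paper analyzes.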