    Echo Cancellation - A Likelihood Ratio Test for Double-talk Versus Channel Change

    Echo cancellers are in wide use in both electrical (four-wire to two-wire mismatch) and acoustic (speaker-microphone coupling) applications. One of the main design problems is the control logic for adaptation. Basically, the algorithm weights should be frozen in the presence of double-talk and adapt quickly in its absence. The control logic can be quite complicated since it is often not easy to discriminate between the echo signal and the near-end speaker. This paper derives a log likelihood ratio test (LRT) for deciding between double-talk (freeze weights) and a channel change (adapt quickly) using a stationary Gaussian stochastic input signal model. The probability density function of a sufficient statistic under each hypothesis is obtained, and the performance of the test is evaluated as a function of the system parameters. The receiver operating characteristics (ROCs) indicate that it is difficult to decide correctly between double-talk and a channel change based on a single look. However, post-detection integration of approximately one hundred sufficient statistic samples yields a detection probability close to unity (0.99) with a small false alarm probability (0.01).
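
    A rough sketch of the decision rule described above: per-sample log likelihood ratios under two zero-mean Gaussian hypotheses are accumulated over a post-detection integration window of about one hundred samples. The variances and the threshold below are illustrative placeholders, not the paper's system parameters.

    ```python
    # Sketch only: integrate per-sample LLRs, then decide freeze vs. adapt.
    import numpy as np

    def llr_gaussian(x, var0, var1):
        """Log likelihood ratio ln p1(x)/p0(x) for zero-mean Gaussians."""
        return 0.5 * np.log(var0 / var1) + 0.5 * x**2 * (1.0 / var0 - 1.0 / var1)

    def decide(samples, var_doubletalk=2.0, var_change=0.5, threshold=0.0):
        """Post-detection integration of ~100 sufficient statistic samples."""
        integrated = np.sum(llr_gaussian(samples, var_doubletalk, var_change))
        return ("adapt quickly (channel change)" if integrated > threshold
                else "freeze weights (double-talk)")

    rng = np.random.default_rng(0)
    print(decide(rng.normal(0.0, np.sqrt(0.5), size=100)))
    ```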

    Detection of curved lines with B-COSFIRE filters: A case study on crack delineation

    The detection of curvilinear structures is an important step for various computer vision applications, ranging from medical image analysis for the segmentation of blood vessels, to remote sensing for the identification of roads and rivers, and to biometrics and robotics, among others. The visual system of the brain has remarkable abilities to detect curvilinear structures in noisy images. This is a nontrivial task, especially for the detection of thin or incomplete curvilinear structures surrounded by noise. We propose a general-purpose curvilinear structure detector that uses the brain-inspired trainable B-COSFIRE filters. It consists of four main steps, namely nonlinear filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis thresholding, and morphological closing. We demonstrate its effectiveness on a data set of noisy images of cracked pavements, where we achieve state-of-the-art results (F-measure = 0.865). The proposed method can be employed in any computer vision methodology that requires the delineation of curvilinear and elongated structures. Comment: Accepted at Computer Analysis of Images and Patterns (CAIP) 2017
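
    The four-step pipeline lends itself to a compact sketch. Below, scikit-image's Frangi vesselness filter stands in for the trainable B-COSFIRE filter, and skeletonization stands in for orientation-aware non-maximum suppression (which forces binarization before thinning here); the test image and quantile thresholds are illustrative, not the paper's setup.

    ```python
    # Sketch of the four steps; Frangi vesselness stands in for B-COSFIRE,
    # and skeletonization for orientation-aware non-maximum suppression.
    import numpy as np
    from skimage import data, filters, morphology

    image = data.camera() / 255.0                  # placeholder grayscale input
    response = filters.frangi(image)               # 1) curvilinear filter response
    low, high = np.quantile(response, [0.90, 0.98])
    binary = filters.apply_hysteresis_threshold(response, low, high)  # 3) hysteresis
    thin = morphology.skeletonize(binary)          # 2) thinning (NMS stand-in)
    result = morphology.closing(thin, morphology.disk(2))  # 4) close small gaps
    print(int(result.sum()), "pixels marked as curvilinear structure")
    ```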

    A learning approach to the detection of gravitational wave transients

    We investigate the class of quadratic detectors (i.e., the statistic is a bilinear function of the data) for the detection of poorly modeled gravitational transients of short duration. We point out that all such detection methods are equivalent to passing the signal through a filter bank and linearly combining the output energies. Existing methods for the choice of the filter bank and of the weight parameters rely essentially on the two following ideas: (i) the use of the likelihood function based on a (possibly non-informative) statistical model of the signal and the noise; (ii) the use of Monte Carlo simulations to tune parametric filters for the best detection probability at a fixed false alarm rate. We propose a third approach in which the filter bank is "learned" from a set of training data. By-products of this viewpoint are that, contrary to previous methods, (i) no explicit description of the probability density function of the data when the signal is present is required, and (ii) the filters we use are non-parametric. The learning procedure may be described as a two-step process: first, estimate the mean and covariance of the signal with the training data; second, find the filters which maximize a contrast criterion, referred to as the deflection, between the "noise only" and "signal+noise" hypotheses. The deflection has the dimensions of a signal-to-noise ratio and uses the quantities estimated in the first step. We apply this original method to the detection of supernova core collapses. We use the catalog of waveforms provided recently by Dimmelmeier et al. to train our algorithm. We expect such a detector to perform better on this particular problem, provided that the reference signals are reliable. Comment: 22 pages, 4 figures
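
    The two-step learning procedure can be sketched as follows, under one common formulation in which a Rayleigh-quotient form of the deflection is maximized by the leading generalized eigenvectors of the signal second-moment matrix against the noise covariance. The function names, the toy training set, and the white-noise covariance are illustrative assumptions, not the paper's exact construction.

    ```python
    # Sketch: step 1 estimates signal statistics from training waveforms;
    # step 2 picks filters maximizing h^T C_s h / h^T C_n h (deflection-like)
    # via a generalized symmetric eigenproblem.
    import numpy as np
    from scipy.linalg import eigh

    def learn_filter_bank(training_waveforms, noise_cov, n_filters=8):
        X = np.asarray(training_waveforms)         # shape (n_signals, n_samples)
        mean = X.mean(axis=0)                      # step 1: signal mean ...
        C_s = np.cov(X, rowvar=False) + np.outer(mean, mean)  # ... and 2nd moment
        evals, evecs = eigh(C_s, noise_cov)        # step 2: generalized modes
        return evecs[:, np.argsort(evals)[::-1][:n_filters]]  # top deflection

    rng = np.random.default_rng(1)
    waveforms = rng.normal(size=(50, 256)) * np.hanning(256)  # toy training set
    bank = learn_filter_bank(waveforms, np.eye(256))
    print(bank.shape)  # (256, 8): one column per learned filter
    ```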

    Ensemble Transport Adaptive Importance Sampling

    Markov chain Monte Carlo methods are a powerful and commonly used family of numerical methods for sampling from complex probability distributions. As applications of these methods grow in size and complexity, the need for efficient methods increases. In this paper, we present a particle ensemble algorithm. At each iteration, an importance sampling proposal distribution is formed using an ensemble of particles. A stratified sample is taken from this distribution and weighted under the posterior; a state-of-the-art ensemble transport resampling method is then used to create an evenly weighted sample ready for the next iteration. We demonstrate that this ensemble transport adaptive importance sampling (ETAIS) method outperforms MCMC methods with equivalent proposal distributions for low-dimensional problems, and in fact shows better-than-linear improvements in convergence rates with respect to the number of ensemble members. We also introduce a new resampling strategy, multinomial transformation (MT), which, while not as accurate as the ensemble transport resampler, is substantially less costly for large ensemble sizes and can be used in conjunction with ETAIS for complex problems. We also show how the algorithmic parameters of the mixture proposal can be quickly tuned to optimise performance. In particular, we demonstrate the methodology's superior sampling for multimodal problems, such as those arising in inference for mixture models, and for problems with expensive likelihoods requiring the solution of a differential equation, for which speed-ups of orders of magnitude are demonstrated. Likelihood evaluations of the ensemble can be computed in a distributed manner, suggesting that this methodology is a good candidate for parallel Bayesian computations.
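
    A minimal sketch of one iteration in the spirit of this scheme is given below: a Gaussian mixture proposal built from the current ensemble, self-normalized importance weighting under the target, then resampling. Plain multinomial resampling stands in for the paper's ensemble transport and multinomial transformation resamplers, and the bimodal toy target and all tuning values are illustrative.

    ```python
    # Sketch of one adaptive importance sampling step with an ensemble proposal.
    import numpy as np

    def log_target(x):
        """Toy bimodal target density (unnormalized), modes near +/-2."""
        return np.logaddexp(-0.5 * np.sum((x - 2) ** 2, axis=-1),
                            -0.5 * np.sum((x + 2) ** 2, axis=-1))

    def etais_step(ensemble, rng, proposal_std=1.0):
        n, d = ensemble.shape
        centers = ensemble[rng.integers(n, size=n)]   # draw mixture components
        proposals = centers + proposal_std * rng.normal(size=(n, d))
        # mixture proposal density over the whole ensemble; constant factors
        # cancel in the self-normalized weights below
        diffs = proposals[:, None, :] - ensemble[None, :, :]
        log_q = np.logaddexp.reduce(
            -0.5 * np.sum(diffs ** 2, axis=-1) / proposal_std ** 2, axis=1)
        log_w = log_target(proposals) - log_q         # importance weights
        w = np.exp(log_w - log_w.max())
        idx = rng.choice(n, size=n, p=w / w.sum())    # multinomial resampling
        return proposals[idx]

    rng = np.random.default_rng(2)
    ens = rng.normal(size=(200, 1))
    for _ in range(50):
        ens = etais_step(ens, rng)
    print(ens.mean(), ens.std())  # samples should cover both modes near +/-2
    ```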

    Diverse Structural Evolution at z > 1 in Cosmologically Simulated Galaxies

    From mock Hubble Space Telescope images, we quantify non-parametric statistics of galaxy morphology, thereby predicting the emergence of relationships among stellar mass, star formation, and observed rest-frame optical structure at 1 < z < 3. We measure automated diagnostics of galaxy morphology in cosmological simulations of the formation of 22 central galaxies with 9.3 < log10 M_*/M_sun < 10.7. These high-spatial-resolution zoom-in calculations enable accurate modeling of the rest-frame UV and optical morphology. Even with small numbers of galaxies, we find that structural evolution is neither universal nor monotonic: galaxy interactions can trigger either bulge or disc formation, and optically bulge-dominated galaxies at this mass may not remain so forever. Simulated galaxies with M_* > 10^10 M_sun have relatively more disc-dominated light profiles than those with lower mass, reflecting significant disc brightening in some haloes at 1 < z < 2. By this epoch, simulated galaxies with specific star formation rates below 10^-9.7 yr^-1 are more likely than normal star-formers to show a broader mix of structural types, especially at M_* > 10^10 M_sun. We analyze a cosmological major merger at z ~ 1.5 and find that the newly proposed MID morphology diagnostics trace later merger stages, while G-M20 traces earlier ones. MID is also sensitive to clumpy star-forming discs. The observability time of typical MID-enhanced events in our simulation sample is less than 100 Myr. A larger sample of cosmological assembly histories may be required to calibrate such diagnostics in the face of their sensitivity to viewing angle, segmentation algorithm, and phenomena such as clumpy star formation and minor mergers. Comment: 23 pages, 16 figures, MNRAS accepted version
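
    As a small, concrete example of the non-parametric diagnostics involved, the sketch below computes the Gini coefficient of pixel fluxes (the G of G-M20, following Lotz et al. 2004) on a toy image; the synthetic "galaxy" stamp is a stand-in for a real segmented cutout.

    ```python
    # Gini statistic of |flux|: 0 for uniform light, approaching 1 when
    # light is concentrated in a few pixels (Lotz et al. 2004 convention).
    import numpy as np

    def gini(pixel_fluxes):
        x = np.sort(np.abs(np.ravel(pixel_fluxes)))   # sort |flux| ascending
        n = x.size
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

    rng = np.random.default_rng(3)
    smooth_disc = np.exp(-np.hypot(*np.mgrid[-20:21, -20:21]) / 5.0)
    print(round(gini(smooth_disc), 3))                    # centrally concentrated light
    print(round(gini(rng.exponential(size=41 * 41)), 3))  # i.i.d. exponential pixels: G ~ 0.5
    ```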

    Systematic approach to nonlinear filtering associated with aggregation operators. Part 2. Frechet MIMO-filters

    Median filtering has been widely used in scalar-valued image processing as an edge-preserving operation. The basic idea is that each pixel value is replaced by the median of the pixels contained in a window around it. In this work, this idea is extended to vector-valued images. It is based on the fact that the median is also the value that minimizes the sum of distances to all grey-level pixels in the window. The Frechet median of a discrete set of vector-valued pixels in a metric space is the point minimizing the sum of metric distances to all sample pixels. In this paper, we extend the notion of the Frechet median to the generalized Frechet median, which minimizes a Frechet cost function (FCF) given by an aggregation function of the metric distances instead of the ordinary sum. Moreover, we propose using an aggregation distance instead of the classical metric distance. We use the generalized Frechet median to construct new nonlinear Frechet MIMO-filters for multispectral image processing. This work was supported by RFBR grants No. 17-07-00886 and No. 17-29-03369 and by the Ural State Forest Engineering University's Center of Excellence in "Quantum and Classical Information Technologies for Remote Sensing Systems".
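
    The window operation can be sketched directly: pick the sample pixel minimizing an aggregation of metric distances to the other pixels in the window. With the plain sum and the Euclidean metric this reduces to the classical vector median; the pluggable aggregate and metric arguments below are illustrative hooks, not the paper's formulation.

    ```python
    # Sample-restricted Frechet median of the vector-valued pixels in a window.
    import numpy as np

    def frechet_median(pixels, aggregate=np.sum, metric=None):
        """Return the window pixel minimizing the aggregated distance cost."""
        pixels = np.asarray(pixels, dtype=float)   # shape (n, channels)
        if metric is None:
            metric = lambda a, b: np.linalg.norm(a - b, axis=-1)
        costs = [aggregate(metric(p, pixels)) for p in pixels]
        return pixels[int(np.argmin(costs))]

    window = np.array([[10, 200, 30], [12, 198, 33], [11, 205, 29],
                       [250, 10, 10],                  # outlier pixel
                       [13, 199, 31]])
    print(frechet_median(window))                      # robust to the outlier
    print(frechet_median(window, aggregate=np.max))    # different aggregation
    ```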

    Kalman-Takens filtering in the presence of dynamical noise

    The use of data assimilation to merge observed data with dynamical models is becoming standard in modern physics. If a parametric model is known, methods such as Kalman filtering have been developed for this purpose. If no model is known, a hybrid Kalman-Takens method has recently been introduced in order to exploit the advantages of optimal filtering in a nonparametric setting. This procedure replaces the parametric model with dynamics reconstructed from delay coordinates, while using the Kalman update formulation to assimilate new observations. We find that this hybrid approach identifies the underlying dynamics with efficiency comparable to parametric methods, even in the presence of dynamical noise. By combining the Kalman-Takens method with an adaptive filtering procedure, we are able to estimate the statistics of both the observational and the dynamical noise. This solves the long-standing problem of separating dynamical and observational noise in time series data, which is especially challenging when no dynamical model is specified.
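
    A toy sketch of the idea, not the paper's ensemble formulation: a crude nearest-neighbour analog forecast in delay coordinates replaces the parametric model, and each noisy observation is assimilated with a fixed scalar Kalman-style gain. The logistic-map data, delay dimension, and gain are all illustrative assumptions.

    ```python
    # Model-free forecast from delay coordinates + scalar Kalman-style update.
    import numpy as np

    def delay_vectors(series, dim=3):
        return np.array([series[i:i + dim] for i in range(len(series) - dim)])

    def analog_forecast(history, dim=3):
        """Predict the next value from the nearest delay-coordinate analog."""
        vecs = delay_vectors(history[:-1], dim)     # library of past states
        query = np.asarray(history[-dim:])
        nn = np.argmin(np.linalg.norm(vecs - query, axis=1))
        return history[nn + dim]                    # value that followed the analog

    rng = np.random.default_rng(4)
    truth = [0.4]
    for _ in range(600):                            # logistic map as toy dynamics
        truth.append(3.9 * truth[-1] * (1 - truth[-1]))
    obs = np.array(truth) + 0.05 * rng.normal(size=len(truth))

    estimate, gain = list(obs[:300]), 0.6           # warm-up library; fixed gain
    for y in obs[300:]:
        pred = analog_forecast(estimate)            # forecast step (no model)
        estimate.append(pred + gain * (y - pred))   # Kalman-style update step
    rmse = np.sqrt(np.mean((np.array(estimate[300:]) - np.array(truth[300:])) ** 2))
    print(f"filtered RMSE {rmse:.3f} vs observation noise 0.05")
    ```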