
    Estimation from quantized Gaussian measurements: when and how to use dither

    Subtractive dither is a powerful method for removing the signal dependence of quantization noise for coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described with a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order statistics-based estimators outperform both the sample mean and midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for the number of measurements K > 20, estimators we present are more efficient than the midrange. (Accepted manuscript: https://arxiv.org/abs/1811.06856)
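
    The two thresholds above translate directly into a decision rule. Below is a minimal Python sketch of subtractive dithering with a mid-tread uniform quantizer, plus the rule of thumb; the function names and simulation parameters are illustrative, not from the paper:

```python
import numpy as np

def should_use_dither(sigma, delta, K):
    """Rule of thumb from the abstract (illustrative names).
    sigma: Gaussian standard deviation, delta: quantization interval, K: measurements."""
    ratio = sigma / delta
    dither_helps = ratio < 1.0 / 3.0                    # dither beneficial below ~1/3
    order_stats_win = K > 20 and ratio > 0.822 / K**0.930
    return dither_helps, order_stats_win

def subtractive_dither_quantize(x, delta, rng):
    """Quantize x after adding uniform dither, then subtract the same dither."""
    d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    q = delta * np.round((x + d) / delta)               # mid-tread uniform quantizer
    return q - d                                        # subtractive: dither is known

rng = np.random.default_rng(0)
sigma, delta, K = 0.1, 1.0, 100                         # coarse: sigma/delta = 0.1
x = 0.4 + sigma * rng.standard_normal(K)                # Gaussian signal around 0.4
y = subtractive_dither_quantize(x, delta, rng)
print(should_use_dither(sigma, delta, K), y.mean())
```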

    Compressed matched filter for non-Gaussian noise

    We consider estimation of a deterministic unknown parameter vector in a linear model with non-Gaussian noise. In the Gaussian case, dimensionality reduction via a linear matched filter provides a simple low-dimensional sufficient statistic which can be easily communicated and/or stored for future inference. Such a statistic is usually unknown in the general non-Gaussian case. Instead, we propose a hybrid matched filter coupled with a randomized compressed sensing procedure, which together create a low-dimensional statistic. We also derive a complementary algorithm for robust reconstruction given this statistic. Our recovery method is based on the fast iterative shrinkage-thresholding algorithm (FISTA), which is used for outlier rejection given the compressed data. We demonstrate the advantages of the proposed framework using synthetic simulations.
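
    The recovery step builds on FISTA, which alternates a gradient step, a soft-thresholding (shrinkage) step, and a momentum update. A generic sketch for the l1-regularized least-squares problem follows; it illustrates the shrinkage-thresholding iteration only, not the paper's hybrid matched filter or its specific outlier-rejection formulation:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the shrinkage step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Generic FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (a sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)             # gradient of the smooth term at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```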

    Computationally intensive Value at Risk calculations

    Market risks are the prospect of financial losses, or gains, due to unexpected changes in market prices and rates. Evaluating the exposure to such risks is nowadays of primary concern to risk managers in financial and non-financial institutions alike. Until the late 1980s, market risks were estimated through gap and duration analysis (interest rates), portfolio theory (securities), sensitivity analysis (derivatives) or "what-if" scenarios. However, all these methods either could be applied only to very specific assets or relied on subjective reasoning.
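
    For orientation, Value at Risk itself is just a quantile of the loss distribution. A minimal historical-simulation sketch is below (illustrative only; the computationally intensive methods the paper is concerned with go well beyond this):

```python
import numpy as np

def value_at_risk(returns, alpha=0.99):
    """Historical-simulation VaR: the alpha-quantile of the loss distribution."""
    losses = -np.asarray(returns)            # losses are negated returns
    return np.quantile(losses, alpha)

rng = np.random.default_rng(1)
daily_returns = 0.0005 + 0.01 * rng.standard_normal(1000)   # synthetic history
print(f"99% one-day VaR: {value_at_risk(daily_returns):.4f}")
```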

    Cramér-von Mises and Anderson-Darling goodness-of-fit tests for extreme value distributions with unknown parameters

    The use of goodness-of-fit tests based on Cramér-von Mises and Anderson-Darling statistics is discussed, with reference to the composite hypothesis that a sample of observations comes from a distribution, FH, whose parameters are unspecified. When this is the case, the critical region of the test has to be redetermined for each hypothetical distribution FH. To avoid this difficulty, a transformation is proposed that produces a new test statistic which is independent of FH. This transformation involves three coefficients that are determined using the asymptotic theory of tests based on the empirical distribution function. A single table of coefficients is thus sufficient for carrying out the test with different hypothetical distributions; a set of probability models in common use in extreme value analysis is considered here, including the following: extreme value 1 and 2, normal and lognormal, generalized extreme value, three-parameter gamma, and log-Pearson type 3, in all cases with parameters estimated using maximum likelihood. Monte Carlo simulations are used to determine small-sample corrections and to assess the power of the tests compared to alternative approaches.
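
    For reference, the Anderson-Darling statistic for a fully specified CDF has a closed form. The sketch below computes it for an extreme value 1 (Gumbel) model with parameters fitted by maximum likelihood; it does not reproduce the paper's three-coefficient transformation, so standard critical-value tables would not be valid here once parameters are estimated:

```python
import numpy as np
from scipy import stats

def anderson_darling(sample, cdf):
    """Anderson-Darling statistic A^2 for a fully specified CDF."""
    x = np.sort(sample)
    n = len(x)
    u = cdf(x)                               # probability-integral transform
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

# Composite hypothesis: Gumbel (extreme value 1) parameters estimated by MLE.
rng = np.random.default_rng(2)
sample = stats.gumbel_r.rvs(loc=10, scale=2, size=50, random_state=rng)
loc, scale = stats.gumbel_r.fit(sample)      # maximum likelihood fit
a2 = anderson_darling(sample, lambda x: stats.gumbel_r.cdf(x, loc, scale))
print(f"A^2 = {a2:.3f}")
```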

    Space Time MUSIC: Consistent Signal Subspace Estimation for Wide-band Sensor Arrays

    Wide-band Direction of Arrival (DOA) estimation with sensor arrays is an essential task in sonar, radar, acoustics, biomedical and multimedia applications. Many state-of-the-art wide-band DOA estimators coherently process frequency-binned array outputs by approximate Maximum Likelihood, Weighted Subspace Fitting or focusing techniques. This paper shows that bin signals obtained by filter-bank approaches do not obey the finite-rank narrow-band array model, because spectral leakage and the change of the array response with frequency within the bin create "ghost sources" dependent on the particular realization of the source process. Therefore, existing DOA estimators based on binning cannot claim consistency even with perfect knowledge of the array response. In this work, a more realistic array model with a finite length of the sensor impulse responses is assumed, which still has finite rank under a space-time formulation. It is shown that signal subspaces at arbitrary frequencies can be consistently recovered under mild conditions by applying MUSIC-type (ST-MUSIC) estimators to the dominant eigenvectors of the wide-band space-time sensor cross-correlation matrix. A novel Maximum Likelihood based ST-MUSIC subspace estimate is developed in order to recover consistency. The number of sources active at each frequency is estimated by Information Theoretic Criteria. The sample ST-MUSIC subspaces can be fed to any subspace fitting DOA estimator at single or multiple frequencies. Simulations confirm that the new technique clearly outperforms binning approaches at sufficiently high signal-to-noise ratio, when model mismatches exceed the noise floor.
    Comment: 15 pages, 10 figures. Accepted in a revised form by the IEEE Transactions on Signal Processing on 12 February 2018. © 2018 IEEE
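
    As a baseline for comparison, the standard narrow-band MUSIC pseudospectrum for a uniform linear array is sketched below; ST-MUSIC instead works on the dominant eigenvectors of the wide-band space-time cross-correlation matrix, which this sketch does not implement:

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, angles_deg, spacing=0.5):
    """Narrow-band MUSIC pseudospectrum for a uniform linear array.
    R: n_sensors x n_sensors sample covariance; spacing in wavelengths."""
    _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = vecs[:, : n_sensors - n_sources]        # noise subspace eigenvectors
    theta = np.deg2rad(angles_deg)
    k = np.arange(n_sensors)[:, None]
    A = np.exp(-2j * np.pi * spacing * k * np.sin(theta)[None, :])  # steering matrix
    proj = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)  # projection onto noise subspace
    return 1.0 / proj                            # peaks at the source directions
```

    Scanning `music_spectrum(R, n_sources, M, np.linspace(-90, 90, 361))` over candidate angles and picking the largest peaks gives the narrow-band DOA estimates that the paper's wide-band method generalizes.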

    Optimal Conditionally Unbiased Bounded-Influence Inference in Dynamic Location and Scale Models

    This paper studies the local robustness of estimators and tests for the conditional location and scale parameters in a strictly stationary time series model. We first derive optimal bounded-influence estimators for such settings under a conditionally Gaussian reference model. Based on these results, optimal bounded-influence versions of the classical likelihood-based tests for parametric hypotheses are obtained. We propose a feasible and efficient algorithm for the computation of our robust estimators, which makes use of analytical Laplace approximations to estimate the auxiliary recentering vectors ensuring Fisher consistency in robust estimation. This strongly reduces the necessary computation time by avoiding the simulation of multidimensional integrals, a task that typically has to be addressed in the robust estimation of nonlinear models for time series. In some Monte Carlo simulations of an AR(1)-ARCH(1) process we show that our robust procedures maintain a very high efficiency under ideal model conditions and at the same time perform very satisfactorily under several forms of departure from conditional normality. In contrast, classical Pseudo Maximum Likelihood inference procedures are found to be highly inefficient under such local model misspecifications. These patterns are confirmed by an application to robust testing for ARCH.
    Keywords: time series models, M-estimators, influence function, robust estimation and testing
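
    The bounded-influence idea can be illustrated in the simplest static setting with a Huber-type M-estimator of location, whose psi-function truncates large residuals so that no single observation can dominate the estimate. This is only a toy analogue of the paper's optimal conditionally unbiased estimators for dynamic location and scale models:

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber psi-function: bounded influence via truncation of large residuals."""
    return np.clip(r, -c, c)

def m_estimate_location(x, c=1.345, n_iter=50):
    """Iteratively reweighted M-estimator of location with a bounded psi-function."""
    x = np.asarray(x, dtype=float)
    scale = np.median(np.abs(x - np.median(x))) / 0.6745   # robust MAD scale
    mu = np.median(x)                                      # robust starting point
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.where(r != 0, huber_psi(r, c) / r, 1.0)     # Huber weights in [0, 1]
        mu = np.sum(w * x) / np.sum(w)                     # weighted-mean update
    return mu
```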