
    $L^1$ Estimation: On the Optimality of Linear Estimators

    Consider the problem of estimating a random variable $X$ from noisy observations $Y = X + Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion. It is well known that the optimal Bayesian estimator in this setting is the conditional median. This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian. Along the way, several other results are presented. In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution. Additionally, we consider other $L^p$ losses and observe the following phenomenon: for $p \in [1,2]$, Gaussian is the only prior distribution that induces a linear optimal Bayesian estimator, and for $p \in (2,\infty)$, infinitely many prior distributions on $X$ can induce linearity. Finally, extensions are provided to encompass noise models leading to conditional distributions from certain exponential families.
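
    As a quick illustration of the easy direction (a sketch, not taken from the paper), assume a Gaussian prior $X \sim \mathcal{N}(0, \sigma^2)$ with $Z \sim \mathcal{N}(0, 1)$ independent of $X$. Then $(X, Y)$ is jointly Gaussian and

    $$X \mid Y = y \;\sim\; \mathcal{N}\!\left(\frac{\sigma^2}{\sigma^2 + 1}\, y,\; \frac{\sigma^2}{\sigma^2 + 1}\right),$$

    which is symmetric about its mean, so the conditional median coincides with the conditional mean $\frac{\sigma^2}{\sigma^2 + 1}\, y$ and is linear in $y$. The paper's contribution is the converse: linearity of the conditional median already forces the prior to be Gaussian.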

    On Complexity of 1-Center in Various Metrics

    We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows.

    • Small $d$: We provide the first linear-time algorithm for the 1-center problem in fixed-dimensional $\ell_1$ metrics. On the other hand, assuming the hitting set conjecture (HSC), we show that when $d = \omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in edit or Ulam metrics.

    • Large $d$: When $d = \Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\epsilon}(nd + n^2\sqrt{d})$.

    We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
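
    For concreteness, here is a minimal brute-force baseline for 1-center in the $\ell_p$-metrics (an illustrative sketch, not the paper's algorithm): it checks every candidate center in $O(n^2 d)$ time, which is exactly the quadratic barrier the HSC-based lower bound addresses.

        import math

        def one_center(points, p=2.0):
            """Brute-force 1-center under the l_p metric: return the input
            point minimizing the maximum distance to all other points.
            Runs in O(n^2 * d) time -- the quadratic baseline that the
            paper's conditional lower bounds show is hard to beat when
            d = omega(log n). Illustrative only; not from the paper."""
            def lp_dist(a, b):
                return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

            best, best_radius = None, math.inf
            for c in points:
                radius = max(lp_dist(c, q) for q in points)
                if radius < best_radius:
                    best, best_radius = c, radius
            return best, best_radius

        # Usage: three points in the plane under the Euclidean (l_2) metric;
        # (1.0, 1.0) wins with radius sqrt(2) ~ 1.414.
        P = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
        print(one_center(P))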

    An Extended Result on the Optimal Estimation under Minimum Error Entropy Criterion

    The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification, and supervised machine learning. There is in general no explicit expression for the optimal MEE estimate unless some constraints on the conditional distribution are imposed. A recent paper proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to generalized MEE estimation, where the optimality criterion is the Rényi entropy or, equivalently, the $\alpha$-order information potential (IP).

    Comment: 15 pages, no figures, submitted to Entropy
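
    For reference, the standard information-theoretic-learning definitions behind the claimed equivalence (not quoted from the paper) are, for an error density $p_e$ and order $\alpha > 0$, $\alpha \neq 1$,

    $$H_\alpha(e) = \frac{1}{1-\alpha} \log \int p_e^\alpha(x)\, dx, \qquad V_\alpha(e) = \int p_e^\alpha(x)\, dx,$$

    so $H_\alpha = \frac{1}{1-\alpha} \log V_\alpha$. Since $\log$ is monotone and the prefactor changes sign at $\alpha = 1$, minimizing the Rényi entropy is equivalent to maximizing the information potential for $\alpha > 1$ and to minimizing it for $\alpha < 1$, which is why the two criteria are interchangeable in the abstract.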