$L^1$ Estimation: On the Optimality of Linear Estimators
Consider the problem of estimating a random variable $X$ from noisy
observations $Y = X + Z$, where $Z$ is standard normal, under the $L^1$ fidelity
criterion. It is well known that the optimal Bayesian estimator in this setting
is the conditional median. This work shows that the only prior distribution on
$X$ that induces linearity in the conditional median is Gaussian.
Along the way, several other results are presented. In particular, it is
demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for
all $y$, then $X$ must follow a Gaussian distribution. Additionally, we
consider other $L^p$ losses and observe the following phenomenon: for $p \in [1,2]$, Gaussian is the only prior distribution that induces a linear optimal
Bayesian estimator, and for $p \in (2,\infty)$, infinitely many prior
distributions on $X$ can induce linearity. Finally, extensions are provided to
encompass noise models leading to conditional distributions from certain
exponential families.
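A quick numerical illustration of the headline claim: under a Gaussian prior, the conditional median of $X$ given $Y = y$ is linear in $y$. The Monte Carlo sketch below is not from the paper; the prior variance and the conditioning window are arbitrary choices made for the demo. It compares the empirical conditional median against the linear rule $\frac{\sigma^2}{\sigma^2+1} y$, which in the Gaussian case is both the conditional mean and the conditional median.

```python
import numpy as np

# Monte Carlo sketch: with a Gaussian prior X ~ N(0, s2) and Y = X + Z,
# Z ~ N(0, 1), the conditional median of X given Y = y0 should match the
# linear map (s2 / (s2 + 1)) * y0.
rng = np.random.default_rng(0)
s2 = 4.0                              # prior variance of X (assumed for this demo)
n = 2_000_000

x = rng.normal(0.0, np.sqrt(s2), n)   # X drawn from the Gaussian prior
y = x + rng.normal(0.0, 1.0, n)       # noisy observation Y = X + Z

slope = s2 / (s2 + 1.0)               # theoretical linear coefficient
for y0 in (-2.0, 0.0, 1.5, 3.0):
    sel = np.abs(y - y0) < 0.05       # crude conditioning on Y close to y0
    med = np.median(x[sel])
    print(f"y0={y0:+.1f}  empirical median={med:+.3f}  linear rule={slope*y0:+.3f}")
```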
On Complexity of 1-Center in Various Metrics
We consider the classic 1-center problem: given a set P of n points in a
metric space, find the point in P that minimizes the maximum distance to the
other points of P. We study the complexity of this problem in d-dimensional
$\ell_p$-metrics and in edit and Ulam metrics over strings of length d. Our
results for the 1-center problem may be classified based on d as follows.
Small d: We provide the first linear-time algorithm for the 1-center
problem in fixed-dimensional $\ell_1$ metrics. On the other hand, assuming the
hitting set conjecture (HSC), we show that when $d = \omega(\log n)$, no
subquadratic algorithm can solve the 1-center problem in any of the
$\ell_p$-metrics, or in edit or Ulam metrics.
Large d: When $d = \Omega(n)$, we extend our conditional lower bound
to rule out subquartic algorithms for the 1-center problem in edit metric
(assuming Quantified SETH). On the other hand, we give a
$(1+\epsilon)$-approximation for 1-center in Ulam metric with running time
$\tilde{O}_{\epsilon}(nd + n^2\sqrt{d})$.
We also strengthen some of the above lower bounds by allowing approximations
or by reducing the dimension d, but only against a weaker class of algorithms
which list all requisite solutions. Moreover, we extend one of our hardness
results to rule out subquartic algorithms for the well-studied 1-median problem
in the edit metric, where, given a set of n strings each of length n, the goal
is to find a string in the set that minimizes the sum of the edit distances to
the rest of the strings in the set.
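For concreteness, the quadratic baseline that these lower bounds target is the trivial exact algorithm: compute all pairwise distances and return the point of minimum eccentricity. Below is a minimal sketch for an $\ell_p$-metric (illustrative only, not from the paper); the results above concern whether this $O(n^2 d)$ bound can or cannot be beaten.

```python
import numpy as np

def one_center_lp(P: np.ndarray, p: float = 2.0) -> int:
    """Exact 1-center by brute force: return the index of the point of P
    that minimizes its maximum l_p distance to the other points.
    Runs in O(n^2 d) time, the quadratic baseline discussed above."""
    # Pairwise l_p distances via broadcasting: D[i, j] = ||P[i] - P[j]||_p.
    D = (np.abs(P[:, None, :] - P[None, :, :]) ** p).sum(axis=-1) ** (1.0 / p)
    ecc = D.max(axis=1)          # eccentricity of each candidate center
    return int(ecc.argmin())

pts = np.random.default_rng(1).normal(size=(200, 8))   # n=200 points, d=8
print("1-center index:", one_center_lp(pts, p=1.0))
```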
Which quantile is the most informative? Maximum likelihood, maximum entropy and quantile regression
This paper studies the connections among quantile regression, the asymmetric Laplace distribution, maximum likelihood and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we propose an estimator based on the joint estimating equations. This approach delivers estimates for the slope parameters together with the associated “most probable” quantile. Similarly, this method can be seen as a penalized quantile regression estimator, where the penalty is given by deviations from the median regression. We derive the asymptotic properties of this estimator by showing consistency and asymptotic normality under certain regularity conditions. Finally, we illustrate the use of the estimator with a simple application to U.S. wage data to evaluate the effect of training on wages.
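A small illustration of the check-loss machinery underlying this connection: minimizing the pinball loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\})$ over a constant recovers the empirical $\tau$-quantile, which is also the maximum-likelihood location of an asymmetric Laplace density with skewness $\tau$. The sketch below uses synthetic skewed data, not the U.S. wage data from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pinball(u, tau):
    """Check (pinball) loss rho_tau: tau*u for u >= 0, (tau-1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=0.8, size=5000)   # skewed synthetic data

# The minimizer of the mean check loss over a constant q is the tau-quantile,
# i.e. the asymmetric-Laplace MLE for the location parameter.
for tau in (0.25, 0.5, 0.75):
    fit = minimize_scalar(lambda q: pinball(y - q, tau).mean())
    print(f"tau={tau}: check-loss argmin={fit.x:.3f}  "
          f"empirical quantile={np.quantile(y, tau):.3f}")
```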
An Extended Result on the Optimal Estimation under Minimum Error Entropy Criterion
The minimum error entropy (MEE) criterion has been successfully used in
fields such as parameter estimation, system identification, and supervised
machine learning. There is in general no explicit expression for the optimal
MEE estimate unless some constraints on the conditional distribution are
imposed. A recent paper has proved that if the conditional density is
conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate
(with Shannon entropy) equals the conditional median. In this study, we extend
this result to the generalized MEE estimation where the optimality criterion is
the Rényi entropy or, equivalently, the $\alpha$-order information potential (IP).
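As a companion to the abstract, here is a minimal sketch of the sample-based $\alpha$-order information potential $V_\alpha = E[p(e)^{\alpha-1}]$ and Rényi entropy $H_\alpha = \frac{1}{1-\alpha}\log V_\alpha$ via Parzen windowing, as commonly used in the MEE literature. The kernel width, $\alpha$, and the test data are assumptions of this demo, not values from the paper. More concentrated errors yield a lower $H_\alpha$, which is what the MEE criterion minimizes.

```python
import numpy as np

def renyi_entropy(e: np.ndarray, alpha: float = 2.0, sigma: float = 0.5) -> float:
    """Parzen-window estimate of the alpha-order Renyi entropy of errors e.
    V_alpha is estimated as the sample mean of p_hat(e_i)^(alpha-1), with
    p_hat a Gaussian kernel density estimate; H_alpha = log(V_alpha)/(1-alpha)."""
    diff = e[:, None] - e[None, :]
    kern = np.exp(-diff**2 / (2.0 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    v_alpha = np.mean(kern.mean(axis=1) ** (alpha - 1.0))   # information potential
    return np.log(v_alpha) / (1.0 - alpha)

rng = np.random.default_rng(3)
tight = rng.normal(0.0, 0.3, 1000)    # concentrated errors -> lower entropy
loose = rng.normal(0.0, 1.5, 1000)    # dispersed errors -> higher entropy
print("H_2(tight) =", renyi_entropy(tight))
print("H_2(loose) =", renyi_entropy(loose))
```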