Limit distribution theory for maximum likelihood estimation of a log-concave density
We find limiting distributions of the nonparametric maximum likelihood
estimator (MLE) of a log-concave density, that is, a density of the form
f_0 = exp(phi_0) where phi_0 is a concave function on the real line.
The pointwise limiting distributions depend on the second and third derivatives
at 0 of the "lower invelope" of an integrated Brownian motion process
minus a drift term depending on the number of vanishing derivatives of
the log-density phi_0 = log f_0 at the point of interest. We also establish the limiting
distribution of the resulting estimator of the mode and establish a
new local asymptotic minimax lower bound which shows the optimality of our mode
estimator in terms of both rate of convergence and dependence of constants on
population values.Comment: Published in at http://dx.doi.org/10.1214/08-AOS609 the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org
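The defining property above is easy to illustrate numerically: a density f = exp(phi) is log-concave exactly when phi = log f is concave. A minimal sketch (ours, not the paper's estimator), checking this for the standard normal via discrete second differences:

```python
import numpy as np

# Minimal sketch (ours, not the paper's estimator): a density f = exp(phi)
# is log-concave when phi = log f is concave. We check this for the standard
# normal by verifying that the discrete second differences of log f are
# non-positive on a grid.
x = np.linspace(-4.0, 4.0, 401)
log_f = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # phi(x) for the N(0,1) density

second_diff = np.diff(log_f, n=2)  # discrete analogue of phi''
print(bool(np.all(second_diff <= 0)))  # True: phi is concave
```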
An adjoint for likelihood maximization
The process of likelihood maximization can be found in many different areas of computational modelling. However, the construction of such models via likelihood maximization requires the solution of a difficult multi-modal optimization problem involving an expensive O(n^3) factorization. The optimization techniques used to solve this problem may require many such factorizations and can result in a significant bottleneck. This article derives an adjoint formulation of the likelihood employed in the construction of a kriging model via reverse algorithmic differentiation. This adjoint is found to calculate the likelihood and all of its derivatives more efficiently than the standard analytical method and can therefore be utilised within a simple local search or within a hybrid global optimization to accelerate convergence and therefore reduce the cost of the likelihood optimization.
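The setting can be sketched in a few lines. Below is a hedged toy version (ours, not the article's adjoint code): the kriging log-likelihood requires an O(n^3) Cholesky factorization of the covariance matrix K, and its derivative with respect to a kernel hyperparameter has the classical closed form 0.5 * tr((a a^T - K^{-1}) dK/dtheta) with a = K^{-1} y, which we verify against a central finite difference. The squared-exponential kernel and the 0.1 nugget are our assumptions.

```python
import numpy as np

# Toy kriging log-likelihood: K(theta) = exp(-theta * d^2) + 0.1 I (nugget).
# The Cholesky factorization below is the O(n^3) bottleneck the abstract
# refers to; each derivative via the analytical route needs further O(n^3) work.
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)
D2 = (X[:, None] - X[None, :]) ** 2

def log_lik(theta):
    K = np.exp(-theta * D2) + 0.1 * np.eye(len(X))
    L = np.linalg.cholesky(K)            # O(n^3) factorization
    a = np.linalg.solve(K, y)
    return -0.5 * y @ a - np.log(np.diag(L)).sum()

def d_log_lik(theta):
    # Classical analytical gradient: 0.5 * tr((a a^T - K^{-1}) dK/dtheta)
    K = np.exp(-theta * D2) + 0.1 * np.eye(len(X))
    dK = -D2 * np.exp(-theta * D2)       # dK/dtheta
    a = np.linalg.solve(K, y)
    return 0.5 * np.trace((np.outer(a, a) - np.linalg.inv(K)) @ dK)

theta0, eps = 10.0, 1e-6
fd = (log_lik(theta0 + eps) - log_lik(theta0 - eps)) / (2 * eps)
print(bool(np.isclose(d_log_lik(theta0), fd, rtol=1e-4)))  # True
```

The article's point is that a reverse-mode (adjoint) evaluation delivers the likelihood and all such derivatives together more cheaply than repeating the analytical computation per hyperparameter.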
Bi-log-concave distribution functions
Nonparametric statistics for distribution functions F or densities f=F' under
qualitative shape constraints provides an interesting alternative to classical
parametric or entirely nonparametric approaches. We contribute to this area by
considering a new shape constraint: F is said to be bi-log-concave, if both
log(F) and log(1 - F) are concave. Many commonly considered distributions are
compatible with this constraint. For instance, any c.d.f. F with log-concave
density f = F' is bi-log-concave. But in contrast to the latter constraint,
bi-log-concavity allows for multimodal densities. We provide various
characterizations. It is shown that combining any nonparametric confidence band
for F with the new shape constraint leads to substantial improvements,
particularly in the tails. To pinpoint this, we show that these confidence
bands imply non-trivial confidence bounds for arbitrary moments and the moment
generating function of F.
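The constraint is straightforward to check numerically. A sketch (our illustration): the standard normal c.d.f. is bi-log-concave, which we test by checking concavity of log F and log(1 - F) through discrete second differences on a grid.

```python
import numpy as np
from math import erf, sqrt

# Bi-log-concavity check for the standard normal c.d.f.:
# F is bi-log-concave iff both log(F) and log(1 - F) are concave.
x = np.linspace(-5.0, 5.0, 1001)
F = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in x])

def concave(g):
    # Non-positive second differences (small cushion for rounding).
    return bool(np.all(np.diff(g, n=2) <= 1e-12))

print(concave(np.log(F)) and concave(np.log1p(-F)))  # True
```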
Partially Adaptive Estimation via Maximum Entropy Densities
We propose a partially adaptive estimator based on information theoretic maximum entropy estimates of the error distribution. The maximum entropy (maxent) densities have simple yet flexible functional forms to nest most of the mathematical distributions. Unlike the nonparametric fully adaptive estimators, our parametric estimators do not involve choosing a bandwidth or trimming, and only require estimating a small number of nuisance parameters, which is desirable when the sample size is small. Monte Carlo simulations suggest that the proposed estimators fare well with non-normal error distributions. When the errors are normal, the efficiency loss due to redundant nuisance parameters is negligible as the proposed error densities nest the normal. The proposed partially adaptive estimator compares favorably with existing methods, especially when the sample size is small. We apply the estimator to a bio-pharmaceutical example and a stochastic frontier model.
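The nesting property mentioned above can be made concrete: with constraints on the first two moments, the maximum entropy density has the exponential-family form exp(lam0 + lam1*x + lam2*x^2), which is exactly a normal. A sketch (our illustration; the lambda values follow from completing the square in the normal log-density):

```python
import numpy as np

# Maxent density with first- and second-moment constraints:
#   f(x) = exp(lam0 + lam1*x + lam2*x^2),
# which matches N(mu, sigma^2) for the lambdas below.
mu, sigma = 1.0, 2.0
lam2 = -1.0 / (2 * sigma**2)
lam1 = mu / sigma**2
lam0 = -mu**2 / (2 * sigma**2) - np.log(sigma * np.sqrt(2 * np.pi))

x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 20001)
f_maxent = np.exp(lam0 + lam1 * x + lam2 * x**2)
f_normal = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

dx = x[1] - x[0]
print(bool(np.allclose(f_maxent, f_normal)))        # True: nests the normal
print(bool(abs(f_maxent.sum() * dx - 1.0) < 1e-5))  # True: integrates to one
```

Higher-order moment constraints add terms lam3*x^3, lam4*x^4, ..., which is how the maxent family flexibly extends beyond the normal.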