Cluster-Seeking James-Stein Estimators
This paper considers the problem of estimating a high-dimensional vector of parameters θ ∈ ℝn from a noisy observation. The noise vector is i.i.d. Gaussian with known variance. For a squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ?
In this paper, we propose shrinkage estimators that attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace. In particular, the components of the observed vector are separated into clusters, and the elements in each cluster are shrunk towards a common attractor. The number of clusters and the attractor for each cluster are determined from the observed vector. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators. The results show that the estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n. Simulation results are provided to support the theoretical claims.
This work was supported in part by a Marie Curie Career Integration Grant and an Early Career Grant from the Isaac Newton Trust.
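The dominance result described above is easy to check numerically. The following sketch compares the Monte Carlo risk of the ML estimator against the positive-part James-Stein estimator shrinking towards the origin, for a θ that lies close to the origin (the favourable case described in the abstract). This is a standard textbook form of the JS estimator, not the paper's own construction:

```python
import numpy as np

def js_estimate(y, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking y towards the origin.

    y      : observed vector (theta + Gaussian noise)
    sigma2 : known noise variance
    """
    n = y.size
    shrink = max(0.0, 1.0 - (n - 2) * sigma2 / np.dot(y, y))
    return shrink * y

rng = np.random.default_rng(0)
n = 100
theta = np.full(n, 0.1)   # true vector chosen close to the origin
trials = 2000

ml_loss = js_loss = 0.0
for _ in range(trials):
    y = theta + rng.standard_normal(n)
    ml_loss += np.sum((y - theta) ** 2)          # squared-error loss of ML
    js_loss += np.sum((js_estimate(y) - theta) ** 2)

print(ml_loss / trials)   # ML risk is approximately n
print(js_loss / trials)   # JS risk is far smaller when theta is near the origin
```

As the abstract notes, this large risk reduction is specific to θ near the shrinkage target; for θ far from the origin the two risks become nearly equal.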
Cluster-Seeking Shrinkage Estimators
This paper considers the problem of estimating a high-dimensional vector θ ∈ ℝn from a noisy one-time observation. The noise vector is assumed to be i.i.d. Gaussian with known variance. For the squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ? In this paper, we attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace for the shrinkage estimator. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators, as well as simulation results to support the claims. The estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n.
This work was supported in part by a Marie Curie Career Integration Grant (Grant Agreement No. 631489) and an Early Career Grant from the Isaac Newton Trust.
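The cluster-seeking idea can be illustrated with a minimal sketch. The paper determines the number of clusters and the attractors from the data; here, as a crude stand-in for that procedure, the components are split into two clusters at the sample median, and each cluster is shrunk towards its own mean using a positive-part JS-type rule (Lindley's form for shrinkage towards an unknown common mean):

```python
import numpy as np

def shrink_to_attractor(y, sigma2=1.0):
    """Positive-part JS shrinkage of y towards its own mean (the attractor)."""
    m = y.size
    if m < 4:
        return y
    resid = y - y.mean()
    factor = max(0.0, 1.0 - (m - 3) * sigma2 / np.dot(resid, resid))
    return y.mean() + factor * resid

def cluster_seeking_estimate(y, sigma2=1.0):
    """Split components into two clusters at the median (a crude stand-in
    for the paper's data-driven cluster selection), then shrink each
    cluster towards its own attractor."""
    theta_hat = y.copy()
    low = y <= np.median(y)
    for mask in (low, ~low):
        theta_hat[mask] = shrink_to_attractor(y[mask], sigma2)
    return theta_hat

rng = np.random.default_rng(1)
n = 200
theta = np.concatenate([np.full(n // 2, -5.0), np.full(n // 2, 5.0)])  # two clusters
y = theta + rng.standard_normal(n)

print(np.sum((y - theta) ** 2))                            # ML loss, about n
print(np.sum((cluster_seeking_estimate(y) - theta) ** 2))  # far smaller loss
```

Note that a single shrinkage target would be useless here: θ is far from the origin and from any one-dimensional subspace, yet shrinking each cluster towards its own attractor recovers most of the available risk reduction.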
Evaluating Nationwide Health Interventions when Standard Before-After Doesn't Work: Malawi's ITN Distribution Program
Nationwide health interventions are difficult to evaluate as contemporaneous control groups do not exist and before-after approaches are usually infeasible. We propose an alternative semi-parametric estimator that is based on the assumption that the intervention has no direct effect on the health outcome but influences the outcome only through its effect on individual behavior. We show that in this case the evaluation problem can be divided into two parts: (i) the effect of the intervention on behavior, for which a conditional before-after assumption is more plausible; and (ii) the effect of the behavior on the health outcome, where we exploit that a contemporaneous control group exists for behavior. The proposed estimator is used to evaluate one of Malawi’s main malaria prevention campaigns, a nationwide insecticide-treated-net (ITN) distribution scheme, in terms of its effect on infant mortality. We exploit that the program affects child mortality only via bed net usage. We find that Malawi’s ITN distribution campaign reduced child mortality by 1 percentage point, which corresponds to about 30% of the total reduction in infant mortality over the study period.
Keywords: treatment effect, semi-parametric estimation, health intervention
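The two-part decomposition can be illustrated with a stylized simulation. All rates below are hypothetical, and the toy data-generating process makes net usage random, so the naive user/non-user comparison in step (ii) is unconfounded here; the paper's semi-parametric estimator handles the realistic, confounded case:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# Stylized DGP (all numbers hypothetical): the campaign affects
# mortality only through bed-net usage, as the paper assumes.
p_use_before, p_use_after = 0.30, 0.60   # usage rate before/after the campaign
p_die_no_net, p_die_net = 0.06, 0.04     # mortality without / with a net

use_after = rng.random(N) < p_use_after
die_after = rng.random(N) < np.where(use_after, p_die_net, p_die_no_net)

# (i) effect of the intervention on behavior (before-after in usage)
delta_usage = use_after.mean() - p_use_before

# (ii) effect of behavior on the outcome (contemporaneous users vs non-users)
effect_of_net = die_after[use_after].mean() - die_after[~use_after].mean()

# combined effect of the campaign on mortality, in percentage points
total_effect = delta_usage * effect_of_net
print(100 * total_effect)   # roughly -0.6 percentage points in this toy setup
```

The product structure is the key point: neither factor alone identifies the campaign's effect, but under the exclusion restriction their product does.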
A Discussion of an Empirical Bayes Multiple Comparison Technique
This paper applies and compares Bayesian and non-Bayesian multiple comparison techniques on sets of chemical analysis data. Suggestions are also made as to which methods should be used.
A Comparison of Small Area Estimation Methods for Poverty Mapping
Poverty maps are an important source of information on the regional distribution of
poverty and are currently used to support regional policy making and to allocate funds to
local jurisdictions. But obtaining accurate poverty maps at low levels of disaggregation is
not straightforward because of insufficient sample size of official surveys in some of the
target regions. Direct estimates, obtained with the region-specific sample data, are
unstable in the sense of having very large sampling errors for regions with small sample
size. Very unstable poverty estimates might make the seemingly poorer regions in one
period appear among the richest in the next, which is inconsistent. On the other
hand, very stable but biased estimates (e.g., too homogeneous across regions) might make
identification of the poorer regions difficult. Here we review the main small area
estimation methods for poverty mapping. In particular, we consider direct estimation, the
Fay-Herriot area level model, the method of Elbers, Lanjouw and Lanjouw (2003) used by
the World Bank, the empirical Best/Bayes (EB) method of Molina and Rao (2010) and its
extension, the Census EB, and finally the hierarchical Bayes proposal of Molina, Nandram
and Rao (2014). We take the point of view of a practitioner and discuss, as
objectively as possible, the benefits and drawbacks of each method, illustrating some of
them through simulation studies.
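The Fay-Herriot area-level model mentioned above is simple enough to sketch. The direct estimate for area d is modeled as y_d = x_d'β + u_d + e_d, with area effects u_d ~ N(0, σ_u²) and sampling errors e_d ~ N(0, ψ_d), where ψ_d is treated as known. The small-area estimate shrinks the unstable direct estimate towards the regression prediction with weight γ_d = σ_u²/(σ_u² + ψ_d). The sketch below fits σ_u² with the original Fay-Herriot moment iteration (REML or ML fitting is also common in practice):

```python
import numpy as np

def fay_herriot_eblup(y, X, psi, n_iter=100):
    """EBLUP under the Fay-Herriot model y_d = x_d' beta + u_d + e_d,
    with u_d ~ N(0, sigma_u^2) and e_d ~ N(0, psi_d), psi_d known."""
    D, p = X.shape
    sigma_u2 = max(np.var(y) - psi.mean(), 0.01)   # crude starting value
    for _ in range(n_iter):
        w = 1.0 / (sigma_u2 + psi)                 # precision weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))  # WLS
        resid = y - X @ beta
        # Fay-Herriot moment iteration: solve sum(w * resid^2) = D - p
        step = (np.sum(w * resid**2) - (D - p)) / np.sum(w**2 * resid**2)
        sigma_u2 = max(sigma_u2 + step, 0.0)
    gamma = sigma_u2 / (sigma_u2 + psi)            # per-area shrinkage weight
    return gamma * y + (1 - gamma) * (X @ beta)

# Simulated areas: direct estimates with large, area-varying sampling variance.
rng = np.random.default_rng(3)
D = 200
X = np.column_stack([np.ones(D), rng.uniform(0, 1, D)])
beta_true = np.array([1.0, 2.0])
psi = rng.uniform(0.5, 2.0, D)                     # known sampling variances
theta = X @ beta_true + rng.normal(0, 0.5, D)      # true area means (sigma_u2 = 0.25)
y = theta + rng.normal(0, np.sqrt(psi))            # unstable direct estimates

eblup = fay_herriot_eblup(y, X, psi)
print(np.mean((y - theta) ** 2))       # direct-estimator MSE (large, unstable)
print(np.mean((eblup - theta) ** 2))   # EBLUP MSE, much smaller here
```

This illustrates the trade-off the review describes: the EBLUP is far more stable than the direct estimates, at the cost of pulling all areas towards the regression line, which is exactly the over-homogenization risk flagged above for biased estimators.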