Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number
The purpose of the present paper is to assess the efficacy of confidence
intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is
widely used by researchers, its statistical properties are largely unexplored.
First, we develop statistical theory which allows us to produce confidence
intervals for Rosenthal's fail-safe number, by discerning whether the number
of studies analysed in a meta-analysis is fixed or random; each case produces
different variance estimators. For a given number of studies and a given
distribution, we provide five variance estimators. Confidence intervals are
examined with a normal approximation and a nonparametric bootstrap. The
accuracy of the different confidence interval estimates is then tested by
simulation under different distributional assumptions. The variance estimator
based on the half-normal distribution has the best coverage probability.
Finally, we provide a table of lower confidence intervals for Rosenthal's
estimator.
Comment: Published in the International Scholarly Research Notices in December 201
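As an illustration of the quantities involved, here is a minimal Python sketch of Rosenthal's fail-safe number together with a nonparametric bootstrap interval of the kind the abstract mentions. The paper's five variance estimators (including the half-normal one) are not reproduced; the study-level resampling scheme, the one-sided 5% critical value, and all names are assumptions.

```python
# Minimal sketch: Rosenthal's fail-safe number and a percentile bootstrap CI.
# Not the paper's variance estimators; illustrative only.
import numpy as np

def fail_safe_n(z, z_alpha=1.6449):
    """Rosenthal's fail-safe number from one-sided study z-scores `z`."""
    z = np.asarray(z, dtype=float)
    return (z.sum() ** 2) / (z_alpha ** 2) - z.size

def bootstrap_ci(z, level=0.95, n_boot=10_000, seed=None):
    """Percentile bootstrap CI for the fail-safe number (resampling studies)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    stats = np.array([
        fail_safe_n(rng.choice(z, size=z.size, replace=True))
        for _ in range(n_boot)
    ])
    return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

z_scores = np.array([2.1, 1.8, 2.5, 3.0, 1.6])  # hypothetical meta-analysis
print(fail_safe_n(z_scores), bootstrap_ci(z_scores, seed=0))
```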
Contour regression: A general approach to dimension reduction
We propose a novel approach to sufficient dimension reduction in regression,
based on estimating contour directions of small variation in the response.
These directions span the orthogonal complement of the minimal space relevant
for the regression and can be extracted according to two measures of variation
in the response, leading to simple and general contour regression (SCR and GCR)
methodology. In comparison with existing sufficient dimension reduction
techniques, this contour-based methodology guarantees exhaustive estimation of
the central subspace under ellipticity of the predictor distribution and mild
additional assumptions, while maintaining $\sqrt{n}$-consistency and computational
ease. Moreover, it proves robust to departures from ellipticity. We establish
population properties for both SCR and GCR, and asymptotic properties for SCR.
Simulations to compare performance with that of standard techniques such as
ordinary least squares, sliced inverse regression, principal Hessian directions
and sliced average variance estimation confirm the advantages anticipated by
the theoretical analyses. We demonstrate the use of contour-based methods on a
data set concerning soil evaporation.
Comment: Published at http://dx.doi.org/10.1214/009053605000000192 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
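A minimal numpy sketch of the simple contour regression step, under one reading of the abstract: standardize the predictors, average outer products of predictor differences over pairs whose responses differ by at most a tuning constant c, and take the eigenvectors with the d smallest eigenvalues as the estimated basis (after back-transformation to the original scale). The function name, the tuning constant, and the assumption that d is known are all illustrative.

```python
# Sketch of simple contour regression (SCR); illustrative, not the
# authors' implementation.
import numpy as np

def scr_directions(X, y, d, c):
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # standardize: Z = Sigma^{-1/2} (X - mu)
    evals, evecs = np.linalg.eigh(Sigma)
    root_inv = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ root_inv
    # accumulate outer products over pairs with small response variation
    K = np.zeros((p, p))
    m = 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(y[i] - y[j]) <= c:
                diff = Z[i] - Z[j]
                K += np.outer(diff, diff)
                m += 1
    K /= max(m, 1)
    # contour directions get large eigenvalues; the d smallest-eigenvalue
    # directions estimate the (standardized) central subspace
    w, V = np.linalg.eigh(K)      # eigenvalues in ascending order
    return root_inv @ V[:, :d]    # basis mapped back to the original X scale
```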
Inference for the neighborhood inequality index
The neighborhood inequality (NI) index measures aspects of spatial inequality in the distribution of incomes within a city. The NI index is a population average of the normalized income gap between each individual's income (observed at a given location in the city) and the incomes of the neighbors located within a certain distance range. The approach overcomes the Modifiable Areal Unit Problem affecting local inequality measures. This paper provides minimum bounds for the NI index standard error and shows that unbiased estimators can be identified under fairly common hypotheses in spatial statistics. Results from a Monte Carlo study support the relevance of the approximations. Rich income data are then used to infer trends in neighborhood inequality in Chicago, IL over the last 35 years.
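The following sketch shows one plausible computation of an NI-type index from the abstract's description: for each individual, average the absolute income gaps to neighbors within distance r and normalize. The normalization by overall mean income is an assumption, not the paper's exact definition.

```python
# Rough sketch of a neighborhood inequality index; the normalization is
# assumed, and the paper's definition may differ.
import numpy as np

def ni_index(coords, income, r):
    coords = np.asarray(coords, dtype=float)   # (n, 2) locations
    income = np.asarray(income, dtype=float)   # (n,) incomes
    mean_income = income.mean()
    gaps = []
    for i in range(len(income)):
        dist = np.linalg.norm(coords - coords[i], axis=1)
        nbrs = (dist <= r) & (dist > 0)        # neighbors within range, not self
        if nbrs.any():
            gaps.append(np.abs(income[i] - income[nbrs]).mean() / mean_income)
    return float(np.mean(gaps))                # average over individuals
```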
Bayesian inference for group-level cortical surface image-on-scalar regression with Gaussian process priors
In regression-based analyses of group-level neuroimage data, researchers
typically fit a series of marginal general linear models to image outcomes at
each spatially-referenced pixel. Spatial regularization of effects of interest
is usually induced indirectly by applying spatial smoothing to the data during
preprocessing. While this procedure often works well, resulting inference can
be poorly calibrated. Spatial modeling of effects of interest leads to more
powerful analyses; however, the number of locations in a typical neuroimage can
preclude standard computation with explicitly spatial models. Here we
contribute a Bayesian spatial regression model for group-level neuroimaging
analyses. We induce regularization of spatially varying regression coefficient
functions through Gaussian process priors. When combined with a simple
nonstationary model for the error process, our prior hierarchy can lead to more
data-adaptive smoothing than standard methods. We achieve computational
tractability through Vecchia approximation of our prior which, critically, can
be constructed for a wide class of spatial correlation functions and results in
prior models that retain full spatial rank. We outline several ways to work
with our model in practice and compare performance against standard vertex-wise
analyses. Finally, we illustrate our method in an analysis of cortical surface
fMRI task contrast data from a large cohort of children enrolled in the
Adolescent Brain Cognitive Development study.
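To make the computational point concrete, here is a minimal sketch of a Vecchia-type approximation to a GP log-density: each location conditions only on its m nearest previously ordered neighbors, so the joint Gaussian factorizes into cheap low-dimensional conditionals. The exponential kernel, the coordinate-sort ordering, and all names are illustrative assumptions, not the paper's construction.

```python
# Sketch of a Vecchia approximation to a zero-mean, unit-variance GP
# log-density; kernel and ordering are assumptions.
import numpy as np

def expo_cov(a, b, range_=0.3):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-d / range_)

def vecchia_loglik(w, coords, m=10):
    """Approximate log-density of w ~ GP(0, expo_cov) at locations `coords`."""
    order = np.argsort(coords[:, 0])   # simple coordinate ordering (heuristic)
    w, coords = w[order], coords[order]
    ll = 0.0
    for i in range(len(w)):
        # condition on the m nearest previously ordered neighbors only
        nb = np.argsort(np.linalg.norm(coords[:i] - coords[i], axis=1))[:m]
        if len(nb) == 0:
            mean, var = 0.0, 1.0
        else:
            C_nn = expo_cov(coords[nb], coords[nb])
            c_in = expo_cov(coords[i:i + 1], coords[nb])[0]
            coef = np.linalg.solve(C_nn, c_in)
            mean = coef @ w[nb]
            var = 1.0 - coef @ c_in
        ll += -0.5 * (np.log(2 * np.pi * var) + (w[i] - mean) ** 2 / var)
    return ll
```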
Approaches for Outlier Detection in Sparse High-Dimensional Regression Models
Modern regression studies often encompass a very large number of potential predictors,
possibly larger than the sample size, and sometimes growing with the sample
size itself. This increases the chances that a substantial portion of the predictors
is redundant, as well as the risk of data contamination. Tackling these problems is
of utmost importance to facilitate scientific discoveries, since model estimates are
highly sensitive both to the choice of predictors and to the presence of outliers. In
this thesis, we contribute to this area considering the problem of robust model selection
in a variety of settings, where outliers may arise both in the response and
the predictors. Our proposals simplify model interpretation, guarantee predictive
performance, and allow us to study and control the influence of outlying cases on
the fit.
First, we consider the co-occurrence of multiple mean-shift and variance-inflation
outliers in low-dimensional linear models. We rely on robust estimation techniques
to identify outliers of each type, exclude mean-shift outliers, and use restricted
maximum likelihood estimation to down-weight and accommodate variance-inflation
outliers into the model fit. Second, we extend our setting to high-dimensional linear
models. We show that mean-shift and variance-inflation outliers can be modeled as
additional fixed and random components, respectively, and evaluated independently.
Specifically, we perform feature selection and mean-shift outlier detection through
a robust class of nonconcave penalization methods, and variance-inflation outlier
detection through the penalization of the restricted posterior mode. The resulting
approach satisfies a robust oracle property for feature selection in the
presence of data contamination (which allows the number of features to
increase exponentially with the sample size) and detects truly outlying cases
of each type with asymptotic probability one. This provides an optimal
trade-off between a high breakdown point and efficiency. Third, focusing on
high-dimensional linear models affected by mean-shift outliers, we develop a
general framework in which L0-constraints coupled with
mixed-integer programming techniques are used to perform simultaneous feature
selection and outlier detection with provably optimal guarantees. In particular,
we provide necessary and sufficient conditions for a robustly strong oracle property,
where again the number of features can increase exponentially with the sample size,
and prove optimality for parameter estimation and the resulting breakdown point.
Finally, we consider generalized linear models and rely on logistic slippage to perform
outlier detection and removal in binary classification. Here we use L0-constraints
and mixed-integer conic programming techniques to solve the underlying double
combinatorial problem of feature selection and outlier detection, and the framework
allows us again to pursue optimality guarantees.
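Since the exact mixed-integer formulations are not given in the abstract, here is only a computationally lean heuristic sketch of the mean-shift model y = Xβ + γ + ε with an L0 budget on γ: alternate least squares for the coefficients with hard thresholding of residuals to update the outlier shifts. The budget h, the iteration cap, and all names are assumptions.

```python
# Heuristic sketch of L0-constrained mean-shift outlier detection
# (not the thesis's mixed-integer program).
import numpy as np

def l0_mean_shift(X, y, h, n_iter=50):
    """Fit y = X beta + gamma + noise with at most h nonzero shifts (h >= 1)."""
    n, _ = X.shape
    gamma = np.zeros(n)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, y - gamma, rcond=None)
        resid = y - X @ beta
        gamma = np.zeros(n)
        keep = np.argsort(np.abs(resid))[-h:]   # h largest residuals
        gamma[keep] = resid[keep]               # flagged as mean-shift outliers
    return beta, gamma                          # nonzero gamma marks outliers
```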
For all the proposed approaches, we also provide computationally lean heuristic
algorithms, tuning procedures, and diagnostic tools that help guide the analysis.
We consider several real-world applications, including the study of the relationships
between childhood obesity and the human microbiome, and of the main drivers of
honey bee loss. All methods developed and data used, as well as the source code
to replicate our analyses, are publicly available.