Generalized Pseudolikelihood Methods for Inverse Covariance Estimation
We introduce PseudoNet, a new pseudolikelihood-based estimator of the inverse
covariance matrix that has a number of useful statistical and computational
properties. We show, through detailed experiments with synthetic data as well
as real-world finance and wind power data, that PseudoNet outperforms related
methods in terms of estimation error and support recovery, making it well
suited for downstream applications where low estimation error is important.
We also show, under regularity conditions, that
PseudoNet is consistent. Our proof assumes the existence of accurate estimates
of the diagonal entries of the underlying inverse covariance matrix; we
additionally provide a two-step method to obtain these estimates, even in a
high-dimensional setting, going beyond the proofs for related methods. Unlike
other pseudolikelihood-based methods, we also show that PseudoNet does not
saturate, i.e., in high dimensions, there is no hard limit on the number of
nonzero entries in the PseudoNet estimate. We present a fast algorithm as well
as screening rules that make computing the PseudoNet estimate over a range of
tuning parameters tractable.
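Although the abstract does not spell out the estimator, the pseudolikelihood view of inverse covariance estimation can be illustrated with a simple neighborhood-regression scheme: each variable is regressed on all the others under a combined l1/l2 (elastic-net-style) penalty, and the fitted coefficients are mapped back to rows of the precision matrix. The sketch below only conveys that general idea; it is not PseudoNet itself (no screening rules, no two-step diagonal refinement), and the function name and parameters are illustrative.

```python
# Minimal sketch of neighborhood-based pseudolikelihood estimation of an
# inverse covariance matrix with an elastic-net-style penalty.
# NOT the PseudoNet algorithm; names and defaults are illustrative only.
import numpy as np
from sklearn.linear_model import ElasticNet

def pseudo_inverse_cov(X, alpha=0.1, l1_ratio=0.5):
    """Estimate Theta ~ Sigma^{-1} by regressing each column of X on the rest."""
    n, p = X.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        model.fit(X[:, others], X[:, j])
        resid = X[:, j] - X[:, others] @ model.coef_
        theta_jj = 1.0 / np.var(resid)               # diagonal: inverse residual variance
        Theta[j, j] = theta_jj
        Theta[j, others] = -theta_jj * model.coef_   # off-diagonals from regression coefficients
    return 0.5 * (Theta + Theta.T)                   # symmetrize the two estimates of each entry
```

Symmetrizing after the fact is the crudest convention; pseudolikelihood objectives of the kind PseudoNet builds on can instead encourage or enforce symmetry directly in the optimization.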
Confidence bands for a log-concave density
We present a new approach for inference about a log-concave distribution:
Instead of using the method of maximum likelihood, we propose to incorporate
the log-concavity constraint in an appropriate nonparametric confidence set for
the cdf. This approach has the advantage that it automatically provides a
measure of statistical uncertainty and it thus overcomes a marked limitation of
the maximum likelihood estimate. In particular, we show how to construct
confidence bands for the density that have a finite sample guaranteed
confidence level. The nonparametric confidence set that we introduce here has
attractive computational and statistical properties: it allows us to bring
modern tools from optimization to bear on this problem via difference-of-convex
programming, and it results in optimal statistical inference. We show
that the width of the resulting confidence bands converges at nearly the
parametric rate when the log density is k-affine.
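As a concrete illustration of the first ingredient, a nonparametric confidence set for the cdf can be built as a finite-sample band around the empirical cdf. The sketch below uses the Dvoretzky-Kiefer-Wolfowitz (DKW) band purely as a stand-in for whatever set the paper actually employs; it omits the log-concavity constraint and the difference-of-convex optimization step that turn the band into density confidence bands.

```python
# Minimal sketch: a finite-sample nonparametric confidence band for the cdf,
# here the DKW band. Illustrative only; the paper's confidence set may differ.
import numpy as np

def dkw_cdf_band(x, alpha=0.05):
    """Return sorted sample points with lower/upper cdf envelopes at level 1 - alpha."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW half-width
    ecdf = np.arange(1, n + 1) / n
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, lower, upper
```

Any log-concave density whose cdf stays inside the envelope at every sample point belongs to the resulting confidence set; the bands for the density itself are then obtained by optimizing over that constrained set.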
Robust Validation: Confident Predictions Even When Distributions Shift
While the traditional viewpoint in machine learning and statistics assumes
training and testing samples come from the same population, practice belies
this fiction. One strategy---coming from robust statistics and
optimization---is thus to build a model robust to distributional perturbations.
In this paper, we take a different approach and describe procedures for robust
predictive inference, where a model provides uncertainty estimates on its
predictions rather than point predictions. We present a method that produces
prediction sets that give (almost exactly) the right coverage level for any
test distribution in an f-divergence ball around the training population. The
method, based on conformal inference, achieves (nearly) valid coverage in
finite samples, under only the condition that the training data be
exchangeable. An essential component of our methodology is to estimate the
amount of expected future data shift and build robustness to it; we develop
estimators and prove their consistency for protection and validity of
uncertainty estimates under shifts. By experimenting on several large-scale
benchmark datasets, including Recht et al.'s CIFAR-v4 and ImageNet-V2 datasets,
we provide complementary empirical results that highlight the importance of
robust predictive validity.
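To make the construction concrete, the sketch below shows a robust variant of split conformal prediction for regression. For simplicity it uses a total-variation ball of radius rho, where covering at level (1 - alpha) + rho on exchangeable calibration data protects coverage under any shift of total-variation distance at most rho; the paper works with general f-divergence balls, a sharper quantile correction, and data-driven estimates of the shift amount, none of which is reproduced here. Function and parameter names are illustrative.

```python
# Minimal sketch of robust split-conformal prediction intervals for regression.
# Illustrative only: uses a total-variation ball, not the paper's f-divergence method.
import numpy as np

def robust_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1, rho=0.05):
    """Return (lower, upper) prediction intervals for each row of X_test."""
    scores = np.abs(y_cal - model.predict(X_cal))     # absolute-residual conformity scores
    n = len(scores)
    level = min(1.0, (1.0 - alpha) + rho)             # TV-ball corrected coverage level
    k = int(np.ceil((n + 1) * level))                 # finite-sample rank for the quantile
    q = np.inf if k > n else np.sort(scores)[k - 1]   # infinite width if rank exceeds n
    preds = model.predict(X_test)
    return preds - q, preds + q
```

The only change relative to standard split conformal is the inflated quantile level; when rho = 0 the procedure reduces to the usual exchangeability-based prediction interval.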