Penalized single-index quantile regression
This article is made available through the Brunel Open Access Publishing Fund. Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

The single-index (SI) regression and single-index quantile (SIQ) estimation methods produce linear combinations of all the original predictors. However, many of the original predictors may be unimportant, and the precision of parameter estimation as well as the accuracy of prediction will be affected by the presence of such unimportant predictors when these methods are used. In this article, an extension of the SIQ method of Wu et al. (2010) is proposed that uses the Lasso and Adaptive Lasso for estimation and variable selection. Computational algorithms are developed to calculate the penalized SIQ estimates. A simulation study and a real data application are used to assess the performance of the methods under consideration.
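To make the penalized objective concrete, the following Python sketch combines the quantile check (pinball) loss with an L1 (Lasso-type) penalty on the index coefficients. All names here, including the link function `link`, are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import numpy as np

def pinball_loss(u, tau):
    # Check loss: rho_tau(u) = u * (tau - 1{u < 0}).
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def penalized_siq_objective(theta, x, y, tau, lam, link):
    # Single-index quantile fit: y ~ g(x @ theta), where `link` stands in
    # for the estimated nonparametric link g (an assumption for this sketch).
    # The L1 term on theta plays the role of the Lasso penalty.
    resid = y - link(x @ theta)
    return pinball_loss(resid, tau).mean() + lam * np.abs(theta).sum()
```

In the adaptive-Lasso variant described in the article, the single `lam` would be replaced by coefficient-specific weights.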
Fast calibrated additive quantile regression
We propose a novel framework for fitting additive quantile regression models,
which provides well calibrated inference about the conditional quantiles and
fast automatic estimation of the smoothing parameters, for model structures as
diverse as those usable with distributional GAMs, while maintaining equivalent
numerical efficiency and stability. The proposed methods are at once
statistically rigorous and computationally efficient, because they apply the
general belief updating framework of Bissiri et al. (2016) to loss-based
inference, but compute by adapting the stable fitting methods of Wood et al.
(2016). We show how the pinball loss is statistically suboptimal relative to a
novel smooth generalisation, which also gives access to fast estimation
methods. Further, we provide a novel calibration method for efficiently
selecting the 'learning rate' balancing the loss with the smoothing priors
during inference, thereby obtaining reliable quantile uncertainty estimates.
Our work was motivated by a probabilistic electricity load forecasting
application, used here to demonstrate the proposed approach. The methods
described here are implemented by the qgam R package, available on the
Comprehensive R Archive Network (CRAN).
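To make the loss functions concrete, here is a small Python sketch contrasting the pinball loss with one standard smooth generalization. The smoothing used below is a generic log-exp construction that recovers the pinball loss as the smoothing parameter shrinks; it is illustrative of, but not necessarily identical to, the smooth loss actually used by qgam.

```python
import numpy as np

def pinball(u, tau):
    # Classical check loss: rho_tau(u) = u * (tau - 1{u < 0}),
    # written as a max of two linear pieces.
    return np.maximum(tau * u, (tau - 1.0) * u)

def smooth_pinball(u, tau, lam=0.1):
    # Differentiable approximation: tau*u + lam*log(1 + exp(-u/lam)).
    # As lam -> 0 the second term tends to max(-u, 0), recovering the
    # pinball loss; for lam > 0 the loss is smooth, which enables fast
    # gradient-based estimation of the kind described in the abstract.
    return tau * u + lam * np.logaddexp(0.0, -u / lam)
```

Plotting both for a fixed `tau` shows the smooth version rounding off the kink at zero, which is what makes stable, fast fitting possible.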
Partially linear additive quantile regression in ultra-high dimension
We consider a flexible semiparametric quantile regression model for analyzing
high dimensional heterogeneous data. This model has several appealing features:
(1) By considering different conditional quantiles, we may obtain a more
complete picture of the conditional distribution of a response variable given
high dimensional covariates. (2) The sparsity level is allowed to be different
at different quantile levels. (3) The partially linear additive structure
accommodates nonlinearity and circumvents the curse of dimensionality. (4) It
is naturally robust to heavy-tailed distributions. In this paper, we
approximate the nonlinear components using B-spline basis functions. We first
study estimation under this model when the nonzero components are known in
advance and the number of covariates in the linear part diverges. We then
investigate a nonconvex penalized estimator for simultaneous variable selection
and estimation. We derive its oracle property for a general class of nonconvex
penalty functions in the presence of ultra-high dimensional covariates under
relaxed conditions. To tackle the challenges of the nonsmooth loss function, the
nonconvex penalty function and the presence of nonlinear components, we combine
a recently developed convex-differencing method with modern empirical process
techniques. Monte Carlo simulations and an application to a microarray study
demonstrate the effectiveness of the proposed method. We also discuss how the
method for a single quantile of interest can be extended to simultaneous
variable selection and estimation at multiple quantiles.

Comment: Published at http://dx.doi.org/10.1214/15-AOS1367 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
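The general class of nonconvex penalties treated in such results typically includes the SCAD penalty of Fan and Li (2001); a minimal Python sketch of SCAD (with the conventional choice a = 3.7), given here as one concrete example of a nonconvex penalty rather than as the paper's specific implementation:

```python
import numpy as np

def scad(t, lam, a=3.7):
    # SCAD penalty: linear (Lasso-like) near zero, quadratic transition,
    # then constant, so large coefficients are not over-shrunk.
    t = np.abs(t)
    p1 = lam * t                                            # t <= lam
    p2 = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < t <= a*lam
    p3 = lam**2 * (a + 1) / 2                               # t > a*lam
    return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))
```

The flat tail (piece `p3`) is what gives nonconvex penalties their oracle behavior: unlike the Lasso, large nonzero coefficients incur no additional shrinkage.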
Interpretable statistics for complex modelling: quantile and topological learning
As the complexity of our data has increased exponentially over the last decades, so has our
need for interpretable features. This thesis revolves around two paradigms to approach
this quest for insights.
In the first part we focus on parametric models, where the problem of interpretability
can be seen as one of “parametrization selection”. We introduce a quantile-centric
parametrization and we show the advantages of our proposal in the context of regression,
where it allows us to bridge the gap between classical generalized linear (mixed)
models and increasingly popular quantile methods.
The second part of the thesis, concerned with topological learning, tackles the
problem from a non-parametric perspective. As topology can be thought of as a way
of characterizing data in terms of their connectivity structure, it allows us to represent
complex and possibly high-dimensional data through a few features, such as the number of
connected components, loops and voids. We illustrate how the emerging branch of
statistics devoted to recovering topological structures in data, Topological Data
Analysis, can be exploited for both exploratory and inferential purposes, with special
emphasis on kernels that preserve the topological information in the data.
Finally, we show with an application how these two approaches can borrow strength
from one another in the identification and description of brain activity through fMRI
data from the ABIDE project.
Bayesian Quantile Regression for Single-Index Models
Using an asymmetric Laplace distribution, which provides a mechanism for
Bayesian inference of quantile regression models, we develop a fully Bayesian
approach to fitting single-index models in conditional quantile regression. In
this work, we use a Gaussian process prior for the unknown nonparametric link
function and a Laplace distribution on the index vector, with the latter
motivated by the recent popularity of the Bayesian lasso idea. We design a
Markov chain Monte Carlo algorithm for posterior inference. Careful
consideration of the singularity of the kernel matrix, and tractability of some
of the full conditional distributions leads to a partially collapsed approach
where the nonparametric link function is integrated out in some of the sampling
steps. Our simulations demonstrate the superior performance of the Bayesian
method versus the frequentist approach. The method is further illustrated by an
application to the hurricane data.

Comment: 26 pages, 8 figures, 10 tables.
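The mechanism mentioned in this abstract, using the asymmetric Laplace distribution (ALD) as a working likelihood, rests on the fact that maximizing the ALD likelihood in the location parameter is equivalent to minimizing the pinball loss. A minimal Python illustration (function and variable names are illustrative only):

```python
import numpy as np

def ald_loglik(y, mu, sigma, tau):
    # Asymmetric Laplace log-likelihood with density
    #   f(y) = tau*(1-tau)/sigma * exp(-rho_tau((y - mu)/sigma)),
    # where rho_tau is the pinball loss. Maximizing over mu is therefore
    # equivalent to minimizing the pinball loss, which is why the ALD
    # provides a working likelihood for Bayesian quantile regression.
    u = (y - mu) / sigma
    rho = np.maximum(tau * u, (tau - 1.0) * u)
    return np.sum(np.log(tau * (1 - tau) / sigma) - rho)
```

In the single-index setting of the abstract, `mu` would be the link function evaluated at the index, with priors placed on the link (a Gaussian process) and the index vector (a Laplace prior).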
Additive models for quantile regression: model selection and confidence bandaids
Additive models for conditional quantile functions provide an attractive framework for nonparametric regression applications focused on features of the response beyond its central tendency. Total variation roughness penalties can be used to control the smoothness of the additive components, much as squared Sobolev penalties are used for classical L2 smoothing splines. We describe a general approach to estimation and inference for additive models of this type. We focus attention primarily on selection of smoothing parameters and on the construction of confidence bands for the nonparametric components. Both pointwise and uniform confidence bands are introduced; the uniform bands are based on the Hotelling (1939) tube approach. Some simulation evidence is presented to evaluate finite sample performance, and the methods are also illustrated with an application to modeling childhood malnutrition in India.
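The total-variation roughness penalty has a simple discrete form when a fitted component is piecewise linear: it sums the absolute jumps in slope between adjacent segments. A minimal Python sketch of this discrete penalty (illustrative only, not the implementation used in the paper):

```python
import numpy as np

def tv_penalty(g_values, x):
    # Discrete total variation of the derivative of g, for g evaluated
    # at sorted points x: sum of absolute changes in successive
    # finite-difference slopes. A linear g has zero penalty; each kink
    # contributes the size of its slope change.
    slopes = np.diff(g_values) / np.diff(x)
    return np.abs(np.diff(slopes)).sum()
```

Because this penalty is itself piecewise linear in the fitted values, the whole penalized quantile objective can be minimized by linear programming, which is one reason total-variation penalties pair naturally with the pinball loss.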