Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
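To make the scalar-on-function model concrete, below is a minimal numpy sketch of functional predictor regression: the coefficient function is expanded in a small Fourier basis and its basis coefficients are estimated by ridge-penalized least squares. The simulated data, basis size, and penalty value are illustrative assumptions, not code from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 100, 50, 7                     # curves, grid points, basis size (assumed)
t = np.linspace(0.0, 1.0, m)             # common observation grid
X = (rng.standard_normal((n, 1)) * np.sin(2 * np.pi * t)
     + 0.5 * rng.standard_normal((n, m)))            # sampled predictor curves X_i(t)
beta_true = np.cos(2 * np.pi * t)                    # coefficient function beta(t)
dt = t[1] - t[0]
y = X @ beta_true * dt + 0.1 * rng.standard_normal(n)  # y_i = int X_i beta + noise

# Fourier basis for beta(t): regularization acts on its K basis coefficients.
freqs = np.arange(1, (K - 1) // 2 + 1)
B = np.hstack([np.ones((m, 1)),
               np.sin(2 * np.pi * t[:, None] * freqs),
               np.cos(2 * np.pi * t[:, None] * freqs)])

# A Riemann sum turns the functional model into ordinary ridge regression:
#   int X_i(t) beta(t) dt ~= (dt * X @ B) @ b,  with beta = B @ b.
Z = dt * X @ B
lam = 1e-3                               # ridge penalty weight (assumed)
b_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ y)
beta_hat = B @ b_hat                     # estimated coefficient function on the grid
```

Replacing the identity penalty with a squared-second-difference penalty on the basis coefficients gives the roughness-penalty style of regularization common in this literature.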
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have regained popularity in recent years and have proved competitive with classical parametric approaches. In this paper we attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues, such as compound estimation problems and exchangeability, which play an important role in regularization and Bayesian approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for this choice is that, while the recent literature is readily available and surveys on the subject have already been published, in the author's opinion the link with past work has not yet been fully clarified.
Comment: Plenary presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control.
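As one concrete instance of the regularization this survey discusses, the sketch below estimates a FIR impulse response by kernel-based regularization, equivalent to a Bayesian MAP estimate under a Gaussian prior. The TC (tuned/correlated) kernel and all parameter values are assumptions chosen for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 50                           # data length and FIR order (assumed)
u = rng.standard_normal(N)               # white-noise input signal
g_true = 0.8 ** np.arange(1, p + 1)      # stable impulse response to recover

# Regressor matrix: row t holds the past inputs u(t-1), ..., u(t-p).
Phi = np.zeros((N, p))
for k in range(p):
    Phi[k + 1:, k] = u[: N - k - 1]
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# TC kernel K(i, j) = rho ** max(i, j): a prior covariance encoding the
# smoothness and exponential decay of stable impulse responses (assumed form).
rho = 0.9
idx = np.arange(1, p + 1)
K = rho ** np.maximum.outer(idx, idx)

# Regularized estimate, i.e. the posterior mean under g ~ N(0, K) and
# measurement noise variance sigma2.
sigma2 = 0.1 ** 2
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
```

Tuning the kernel hyperparameters rho and sigma2, for instance by marginal likelihood, is the empirical Bayes step emphasized in this literature.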
Regression on manifolds: Estimation of the exterior derivative
Collinearity and near-collinearity of predictors cause difficulties when
doing regression. In these cases, variable selection becomes untenable because
of mathematical issues concerning the existence and numerical stability of the
regression coefficients, and interpretation of the coefficients is ambiguous
because gradients are not defined. Using a differential geometric
interpretation, in which the regression coefficients are interpreted as
estimates of the exterior derivative of a function, we develop a new method to
do regression in the presence of collinearities. Our regularization scheme can
improve estimation error, and it can be easily modified to include lasso-type
regularization. These estimators also have simple extensions to the "large p, small n" context.
Comment: Published at http://dx.doi.org/10.1214/10-AOS823 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
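The collinearity problem and the generic regularization remedy can be seen in a few lines. The sketch below contrasts ordinary least squares with a ridge fit on nearly collinear predictors; it illustrates the difficulty the paper addresses, not the authors' exterior-derivative estimator, and all data are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
z = rng.standard_normal(n)
# Two nearly collinear predictors plus one independent predictor (simulated).
X = np.column_stack([z,
                     z + 1e-3 * rng.standard_normal(n),
                     rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0, 0.5]) + 0.1 * rng.standard_normal(n)

# OLS: X'X is nearly singular, so the fitted coefficients on the collinear
# pair are numerically unstable and hard to interpret.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge: adding lam * I makes the system well conditioned, shrinking the
# estimate along the near-collinear direction.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(beta_ols.round(2), beta_ridge.round(2))
```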
A Selective Review of Group Selection in High-Dimensional Models
Grouping structures arise naturally in many statistical modeling problems.
Several methods have been proposed for variable selection that respect grouping
structure in variables. Examples include the group LASSO and several concave
group selection methods. In this article, we give a selective review of group
selection concerning methodological developments, theoretical properties and
computational algorithms. We pay particular attention to group selection
methods involving concave penalties. We address both group selection and
bi-level selection methods. We describe several applications of these methods
in nonparametric additive models, semiparametric regression, seemingly
unrelated regressions, genomic data analysis, and genome-wide association studies. We also highlight some issues that require further study.
Comment: Published at http://dx.doi.org/10.1214/12-STS392 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
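As a reference point for the group LASSO the review starts from, below is a minimal proximal-gradient sketch: the proximal operator of the group penalty is block soft-thresholding of each group's coefficient vector. The data, grouping, and penalty level are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(z, thresh):
    """Proximal operator of thresh * ||.||_2 (block soft-thresholding)."""
    norm = np.linalg.norm(z)
    return np.zeros_like(z) if norm <= thresh else (1.0 - thresh / norm) * z

def group_lasso(X, y, groups, lam, n_iter=500):
    """Proximal gradient for 0.5 * ||y - X b||^2 + lam * sum_g ||b_g||_2."""
    p = X.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - step * X.T @ (X @ b - y)     # gradient step on the smooth part
        for g in groups:                     # prox applied group by group
            b[g] = group_soft_threshold(z[g], step * lam)
    return b

# Illustrative use: three groups of three predictors; only the first two matter.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 9))
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
y = X[:, :6] @ np.array([1.0, -1.0, 0.5, 0.8, 0.0, -0.3]) \
    + 0.1 * rng.standard_normal(100)
print(group_lasso(X, y, groups, lam=5.0).round(2))
```

Bi-level selection methods modify the within-group penalty so that individual coefficients inside a selected group can also be zeroed out.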