Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. The article concludes with a
brief discussion of potential areas of future development in this field.
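The basis-expansion idea at the heart of functional regression can be made concrete with a small sketch. The following is a minimal, hypothetical illustration of scalar-on-function regression (all data, grid sizes, and the polynomial basis are assumptions for illustration; real methods typically use penalized splines or other bases):

```python
import numpy as np

# Minimal sketch of scalar-on-function regression (hypothetical data):
# y_i = integral of x_i(t) * beta(t) dt + eps_i, with the coefficient
# function beta(t) expanded in a small basis. A simple polynomial basis
# stands in here for the spline/wavelet bases used in practice.

rng = np.random.default_rng(0)
n, T, K = 100, 50, 5                  # curves, grid points, basis functions
t = np.linspace(0, 1, T)

# Basis matrix: columns are basis functions evaluated on the grid.
B = np.vander(t, K, increasing=True)  # polynomial basis (illustrative only)

# Simulated functional predictors and scalar responses.
X = rng.normal(size=(n, T))           # each row is one sampled curve x_i(t)
beta_true = np.sin(2 * np.pi * t)     # true coefficient function
y = X @ beta_true / T + rng.normal(scale=0.1, size=n)

# Reduce the functional model to ordinary least squares: the design is
# Z = X B / T, approximating integral x_i(t) b_k(t) dt by a Riemann sum.
Z = X @ B / T
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef                   # estimated coefficient function on the grid
```

The key design choice this sketch shows is regularization through dimension reduction: the infinite-dimensional coefficient function is constrained to a K-dimensional basis span, turning the functional problem into an ordinary regression with K coefficients.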
Ordinal Probit Functional Regression Models with Application to Computer-Use Behavior in Rhesus Monkeys
Research in functional regression has made great strides in expanding to
non-Gaussian functional outcomes; however, the exploration of ordinal functional
outcomes remains limited. Motivated by a study of computer-use behavior in
rhesus macaques (Macaca mulatta), we introduce the Ordinal Probit
Functional Regression Model (OPFRM) to perform ordinal function-on-scalar
regression. The OPFRM is flexibly formulated to allow for the choice of
different basis functions including penalized B-splines, wavelets, and
O'Sullivan splines. We demonstrate the operating characteristics of the model
in simulation using a variety of underlying covariance patterns, showing that
the model performs reasonably well in estimation under multiple basis functions. We
also present and compare two approaches for conducting posterior inference,
showing that joint credible intervals tend to outperform pointwise credible intervals.
Finally, in application, we determine demographic factors associated with the
monkeys' computer use over the course of a year and provide a brief analysis of
the findings.
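The ordinal probit mechanism that underlies this kind of model can be sketched in a few lines. This is not the fitted OPFRM (the cutpoints, latent mean, and category count below are hypothetical); it only shows how an observed ordinal level arises from a latent Gaussian variable crossing ordered cutpoints:

```python
import math

# Ordinal probit link (illustrative values, not the OPFRM fit): an ordinal
# outcome y in {0, ..., K-1} is generated by a latent Gaussian z with mean
# eta falling between sorted cutpoints c_1 < ... < c_{K-1}.

def probit_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordinal_probs(eta, cutpoints):
    """P(y = k | eta) for each category k, given sorted cutpoints."""
    cdf = [0.0] + [probit_cdf(c - eta) for c in cutpoints] + [1.0]
    return [cdf[k + 1] - cdf[k] for k in range(len(cutpoints) + 1)]

# Example: latent mean 0.3 with hypothetical cutpoints at -0.5 and 0.8
# yields probabilities over three ordinal categories.
p = ordinal_probs(0.3, [-0.5, 0.8])
```

In the functional setting, the latent mean eta would itself vary over the domain, driven by scalar covariates through basis-expanded coefficient functions.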
Learning Active Basis Models by EM-Type Algorithms
The EM algorithm is a convenient tool for maximum likelihood model fitting when
the data are incomplete or when there are latent variables or hidden states. In
this review article we explain that the EM algorithm is a natural computational
scheme for learning image templates of object categories where the learning is
not fully supervised. We represent an image template by an active basis model,
which is a linear composition of a selected set of localized, elongated and
oriented wavelet elements that are allowed to slightly perturb their locations
and orientations to account for the deformations of object shapes. The model
can be easily learned when the objects in the training images are of the same
pose, and appear at the same location and scale. This is often called
supervised learning. In the situation where the objects may appear at different
unknown locations, orientations and scales in the training images, we have to
incorporate the unknown locations, orientations and scales as latent variables
into the image generation process, and learn the template by EM-type
algorithms. The E-step imputes the unknown locations, orientations and scales
based on the currently learned template. This step can be considered
self-supervision, which involves using the current template to recognize the
objects in the training images. The M-step then relearns the template based on
the imputed locations, orientations and scales, and this is essentially the
same as supervised learning. So the EM learning process iterates between
recognition and supervised learning. We illustrate this scheme by several
experiments.
Comment: Published at http://dx.doi.org/10.1214/09-STS281 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
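The alternation between recognition (E-step) and supervised learning (M-step) described above can be sketched with a toy 1-D analogue. This is an assumption-laden simplification: the latent variables here are only unknown shifts of a 1-D template, whereas the active basis model composes localized wavelet elements in 2-D with perturbable orientations:

```python
import numpy as np

# Toy E/M recognition-learning loop (hypothetical 1-D setup): each signal
# contains a template at an unknown location, which plays the role of the
# latent variable.

rng = np.random.default_rng(1)
L, W, n = 40, 10, 30                  # signal length, template width, signals
template_true = np.hanning(W)

signals, true_pos = [], []
for _ in range(n):
    pos = int(rng.integers(0, L - W))
    s = rng.normal(scale=0.3, size=L)
    s[pos:pos + W] += template_true   # embed template at an unknown shift
    signals.append(s)
    true_pos.append(pos)
signals = np.array(signals)

template = np.ones(W)                 # crude initialization
for _ in range(10):
    # E-step (recognition): impute each latent location as the shift whose
    # window correlates best with the currently learned template.
    windows = np.lib.stride_tricks.sliding_window_view(signals, W, axis=1)
    pos_hat = (windows @ template).argmax(axis=1)
    # M-step (supervised learning): relearn the template by averaging the
    # windows aligned at the imputed locations.
    template = np.mean(
        [signals[i, p:p + W] for i, p in enumerate(pos_hat)], axis=0
    )
```

The loop mirrors the abstract's description: the E-step uses the current template to recognize where the object sits in each training signal, and the M-step is exactly supervised learning on the now-aligned data.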
Parameter selection in sparsity-driven SAR imaging
We consider a recently developed sparsity-driven synthetic aperture radar (SAR) imaging approach that can produce superresolution, feature-enhanced images. However, this regularization-based approach requires the selection of a hyper-parameter in order to generate such high-quality images. In this paper we present a number of techniques for automatically selecting the hyper-parameter
involved in this problem. In particular, we propose and develop numerical procedures for the use of Stein's unbiased risk estimation, generalized cross-validation, and L-curve techniques for automatic parameter choice. We demonstrate and compare the effectiveness of these procedures through experiments based on both simple synthetic scenes and electromagnetically simulated realistic data. Our results suggest that sparsity-driven SAR imaging coupled with the proposed automatic parameter choice procedures offers significant improvements over conventional SAR imaging.
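Generalized cross-validation, one of the selection criteria named above, is easiest to see on a quadratic (Tikhonov) problem. The sketch below is an assumption: it uses simple ridge regularization and synthetic data, not the paper's nonquadratic sparsity-driven SAR cost, but the GCV scoring idea carries over:

```python
import numpy as np

# Sketch of generalized cross-validation (GCV) for regularization parameter
# choice, shown for Tikhonov (ridge) regularization on hypothetical data.

rng = np.random.default_rng(2)
m, n = 60, 30
A = rng.normal(size=(m, n))               # forward operator
x_true = np.zeros(n)
x_true[::6] = 1.0                         # sparse-ish ground truth
y = A @ x_true + rng.normal(scale=0.5, size=m)

def gcv_score(lam):
    """GCV(lam) = m * ||(I - H) y||^2 / (m - trace(H))^2,
    where H = A (A'A + lam I)^{-1} A' is the influence matrix."""
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    resid = y - H @ y
    return m * (resid @ resid) / (m - np.trace(H)) ** 2

# Scan a log-spaced grid and pick the parameter minimizing the GCV score.
lams = np.logspace(-3, 3, 25)
scores = [gcv_score(l) for l in lams]
lam_best = lams[int(np.argmin(scores))]
```

For the nonquadratic sparsity-driven cost, no closed-form influence matrix exists, which is why the paper develops numerical procedures for these criteria rather than applying the ridge formulas directly.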