Fast Covariance Estimation for High-dimensional Functional Data
For smoothing covariance functions, we propose two fast algorithms that scale
linearly with the number of observations per function. Most available methods
and software cannot smooth covariance matrices beyond moderate dimension; the
recently introduced sandwich smoother is an exception, but it is not adapted
to smoothing covariance matrices of very large dimension. Such large
covariance matrices are becoming increasingly common, e.g., in 2- and
3-dimensional medical imaging and
high-density wearable sensor data. We introduce two new algorithms that can
handle very large covariance matrices: 1) FACE: a fast implementation of the
sandwich smoother and 2) SVDS: a two-step procedure that first applies singular
value decomposition to the data matrix and then smoothes the eigenvectors.
Compared to existing techniques, these new algorithms are at least an order of
magnitude faster in high dimensions and drastically reduce memory requirements.
The new algorithms provide near-instantaneous (a few seconds) smoothing for
large matrices and very fast (on the order of 10 minutes) smoothing for even
larger ones. Although SVDS is simpler than FACE, we provide ready-to-use,
scalable R software for FACE. When incorporated into R package {\it refund},
FACE improves the speed of penalized functional regression by an order of
magnitude, even for data of normal size. We recommend that FACE be used in
practice for the analysis of noisy and high-dimensional functional data.

Comment: 35 pages, 4 figures
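The SVDS idea described above (singular value decomposition of the data matrix first, then smoothing of the eigenvectors) can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the moving-average smoother below is a stand-in for the spline smoothing used in practice, and the function name and parameters are made up for this example.

```python
import numpy as np

def svds_covariance_sketch(Y, n_components=5, window=5):
    """Illustrative SVDS-style covariance estimate.
    Y: (n_subjects, n_grid) matrix of functions sampled on a common grid."""
    Yc = Y - Y.mean(axis=0)                       # center across subjects
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    V = Vt[:n_components].T                       # leading right singular vectors
    # Smooth each eigenvector; a moving average stands in for a
    # penalized spline smoother here.
    kernel = np.ones(window) / window
    V_smooth = np.column_stack(
        [np.convolve(V[:, k], kernel, mode="same") for k in range(n_components)]
    )
    eigvals = s[:n_components] ** 2 / (Y.shape[0] - 1)
    # Reconstruct a low-rank smoothed covariance. The full J x J matrix is
    # formed here only for illustration; keeping the factors avoids the
    # memory blow-up for very large J.
    return V_smooth @ np.diag(eigvals) @ V_smooth.T
```

Because the SVD of an n-by-J data matrix with n much smaller than J costs O(n^2 J), this route scales linearly in the grid size J, which is the source of the speedup the abstract describes.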
Structured Functional Principal Component Analysis
Motivated by modern observational studies, we introduce a class of functional
models that expands nested and crossed designs. These models account for the
natural inheritance of correlation structure from sampling design in studies
where the fundamental sampling unit is a function or image. Inference is based
on functional quadratics and their relationship with the underlying covariance
structure of the latent processes. A computationally fast and scalable
estimation procedure is developed for ultra-high dimensional data. Methods are
illustrated in three examples: high-frequency accelerometer data for daily
activity, pitch linguistic data for phonetic analysis, and EEG data for
studying electrical brain activity during sleep.
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
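The first of the three types above, scalar-on-function regression, fits y_i = alpha + ∫ x_i(t) beta(t) dt + error by expanding beta(t) in basis functions and regularizing the coefficients. The sketch below is a minimal illustration under stated assumptions: it uses a plain polynomial basis and a ridge penalty as stand-ins for the spline bases and roughness penalties used in the literature, and the function name and parameters are invented for this example.

```python
import numpy as np

def scalar_on_function_sketch(X, y, n_basis=7, lam=1e-3):
    """Illustrative penalized scalar-on-function regression.
    X: (n, J) functional predictors on a common grid over [0, 1]; y: (n,)."""
    n, J = X.shape
    t = np.linspace(0.0, 1.0, J)
    # Basis matrix: columns are basis functions evaluated on the grid
    # (monomials here; B-splines would be used in practice).
    B = np.vander(t, n_basis, increasing=True)    # (J, n_basis)
    # Riemann-sum integration: Z[i, k] approximates integral x_i(t) B_k(t) dt,
    # reducing the functional model to ordinary regression on Z.
    Z = (X @ B) / J                               # (n, n_basis)
    Z = np.column_stack([np.ones(n), Z])          # add intercept column
    # Ridge-penalized least squares; the intercept is left unpenalized.
    P = lam * np.eye(n_basis + 1)
    P[0, 0] = 0.0
    coef = np.linalg.solve(Z.T @ Z + P, Z.T @ y)
    alpha, b = coef[0], coef[1:]
    beta_hat = B @ b                              # estimated beta(t) on the grid
    return alpha, beta_hat
```

The key step is the replication-plus-regularization pattern the article emphasizes: information is pooled across the n replicate functions through the design matrix Z, while the penalty enforces smoothness within the estimated coefficient function.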