256 research outputs found

    Spectral Density Bandwidth Choice: Source of Nonmonotonic Power for Tests of a Mean Shift in a Time Series

    Data-dependent bandwidth choices for zero-frequency spectral density estimators of a time series are shown to be an important source of nonmonotonic power when testing for a shift in mean. It is shown that if the spectral density is estimated under the null hypothesis of a stable mean using a data-dependent bandwidth (with or without prewhitening), nonmonotonic power appears naturally for some popular tests, including the CUSUM test. On the other hand, under some fixed bandwidth choices, power is monotonic. Empirical examples and simulations illustrate these power properties. Theoretical explanations for the power results are provided.
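
    The mechanism can be illustrated with a short, self-contained R sketch (an illustration under simplified assumptions, not the paper's code): a CUSUM-type statistic is scaled by a long-run variance estimated under the null of a stable mean, once with a fixed Bartlett bandwidth and once with a textbook Andrews-style AR(1) plug-in bandwidth. The sample size, shift sizes, and the particular plug-in rule below are arbitrary illustrative choices.

        # Bartlett-kernel long-run variance, computed after demeaning
        # (i.e., estimated under the null hypothesis of a stable mean).
        lrv_bartlett <- function(x, bw) {
          nn <- length(x)
          x  <- x - mean(x)
          g  <- sapply(0:bw, function(k) sum(x[seq_len(nn - k)] * x[seq_len(nn - k) + k]) / nn)
          g[1] + if (bw > 0) 2 * sum((1 - (1:bw) / (bw + 1)) * g[-1]) else 0
        }

        # Andrews-type AR(1) plug-in bandwidth for the Bartlett kernel.
        andrews_bw <- function(x) {
          nn  <- length(x)
          x   <- x - mean(x)
          rho <- sum(x[-1] * x[-nn]) / sum(x^2)
          a1  <- 4 * rho^2 / ((1 - rho)^2 * (1 + rho)^2)
          max(1, floor(1.1447 * (a1 * nn)^(1 / 3)))
        }

        # Sup-CUSUM statistic scaled by the chosen long-run variance estimate.
        cusum_stat <- function(x, bw) {
          s <- cumsum(x - mean(x))
          max(abs(s)) / sqrt(length(x) * lrv_bartlett(x, bw))
        }

        set.seed(1)
        n      <- 200
        shifts <- c(0, 1, 2, 4, 8)
        t(sapply(shifts, function(d) {
          y <- rnorm(n) + rep(c(0, d), each = n / 2)   # mean shift of size d at mid-sample
          c(shift    = d,
            fixed_bw = cusum_stat(y, bw = 1),
            data_dep = cusum_stat(y, bw = andrews_bw(y)))
        }))

    With the fixed bandwidth the statistic grows with the shift size; with the data-dependent bandwidth the demeaned series looks strongly autocorrelated once a large shift is present, the bandwidth and the long-run variance estimate inflate, and the statistic peaks and then falls, which is the nonmonotonic-power phenomenon described above.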

    Bayesian Functional Data Analysis Using WinBUGS

    We provide user-friendly software for Bayesian analysis of functional data models using WinBUGS 1.4. The excellent properties of Bayesian analysis in this context are due to: (1) dimensionality reduction, which leads to low-dimensional projection bases; (2) mixed model representation of functional models, which provides a modular approach to model extension; and (3) orthogonality of the principal component bases, which contributes to excellent chain convergence and mixing properties. Our paper provides one more, essential, reason for using Bayesian analysis for functional models: the existence of software.
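
    A compact R sketch (my own illustration, not the paper's WinBUGS code) of the representation the abstract alludes to: projecting curves onto an orthogonal principal-component basis turns a functional model into a low-dimensional mixed model whose scores are independent random effects, which is what makes a WinBUGS implementation convenient. The simulated data and the number of retained components are arbitrary.

        set.seed(1)
        n  <- 100; p <- 50
        tt <- seq(0, 1, length.out = p)
        Y  <- outer(rnorm(n), sin(2 * pi * tt)) +
              outer(rnorm(n, sd = 0.5), cos(2 * pi * tt)) +
              matrix(rnorm(n * p, sd = 0.2), n, p)         # simulated functional data, n x p

        mu  <- colMeans(Y)
        eig <- eigen(cov(Y), symmetric = TRUE)
        K   <- 4                                            # retained principal components
        Phi <- eig$vectors[, 1:K]                           # orthogonal basis functions
        xi  <- sweep(Y, 2, mu) %*% Phi                      # subject-specific scores

        # Mixed-model form: Y[i, ] = mu + Phi %*% xi[i, ] + error, with
        # xi[i, k] ~ N(0, lambda_k) independent across k. In a Bayesian fit the
        # scores and the variances lambda_k are the (random-effect) parameters to sample.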

    Longitudinal Scalar-on-Function Regression with Application to Tractography Data

    We propose a class of estimation techniques for scalar-on-function regression in longitudinal studies where both outcomes, such as test results on motor functions, and functional predictors, such as brain images, may be observed at multiple visits. Our methods are motivated by a longitudinal brain diffusion tensor imaging (DTI) tractography study. One of the primary goals of the study is to evaluate the contemporaneous association between human function and brain imaging over time. The complexity of the study requires development of methods that can simultaneously incorporate: (1) multiple functional (and scalar) regressors; (2) longitudinal outcome and functional predictor measurements per patient; (3) Gaussian or non-Gaussian outcomes; and (4) missing values within functional predictors. We review existing approaches designed to handle such data and discuss their limitations. We propose two versions of a new method, longitudinal functional principal components regression. These methods extend the well-known functional principal component regression and allow for different effects of subject-specific trends in curves and of visit-specific deviations from that trend. The different methods are compared in simulation studies, and the most promising approaches are used for analyzing the tractography data.
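
    As background, here is a hedged, self-contained R sketch of the cross-sectional building block, scalar-on-function regression with a spline expansion of the coefficient function; the longitudinal structure, non-Gaussian outcomes, and missing-data handling that the paper adds are not shown, and all data and tuning choices below are illustrative.

        library(splines)
        set.seed(1)
        n  <- 150; p <- 80
        tt <- seq(0, 1, length.out = p)
        dt <- tt[2] - tt[1]
        X  <- t(replicate(n, cumsum(rnorm(p)) / sqrt(p)))       # functional predictors X_i(t)
        beta_true <- sin(2 * pi * tt)
        y  <- drop(X %*% beta_true) * dt + rnorm(n, sd = 0.3)   # y_i = int X_i(t) beta(t) dt + e_i

        B   <- bs(tt, df = 10)                 # spline basis for the coefficient function beta(t)
        Z   <- (X %*% B) * dt                  # Z[i, b] approximates int X_i(t) B_b(t) dt
        fit <- lm(y ~ Z)                       # scalar-on-function fit via the basis coefficients
        beta_hat <- B %*% coef(fit)[-1]        # estimated coefficient function on the grid tt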

    Fast Covariance Estimation for High-dimensional Functional Data

    For smoothing covariance functions, we propose two fast algorithms that scale linearly with the number of observations per function. Most available methods and software cannot smooth covariance matrices of dimension J × J with J > 500; the recently introduced sandwich smoother is an exception, but it is not adapted to smooth covariance matrices of large dimensions such as J ≥ 10,000. Covariance matrices of order J = 10,000, and even J = 100,000, are becoming increasingly common, e.g., in 2- and 3-dimensional medical imaging and high-density wearable sensor data. We introduce two new algorithms that can handle very large covariance matrices: 1) FACE, a fast implementation of the sandwich smoother; and 2) SVDS, a two-step procedure that first applies singular value decomposition to the data matrix and then smooths the eigenvectors. Compared to existing techniques, these new algorithms are at least an order of magnitude faster in high dimensions and drastically reduce memory requirements. The new algorithms provide instantaneous (a few seconds) smoothing for matrices of dimension J = 10,000 and very fast (< 10 minutes) smoothing for J = 100,000. Although SVDS is simpler than FACE, we provide ready-to-use, scalable R software for FACE. When incorporated into the R package refund, FACE improves the speed of penalized functional regression by an order of magnitude, even for data of normal size (J < 500). We recommend that FACE be used in practice for the analysis of noisy and high-dimensional functional data.
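
    A brief usage sketch, assuming the FACE implementation shipped in the refund R package as fpca.face(); the simulated data, the knots value, and the pve threshold below are illustrative choices, not recommendations from the paper.

        # install.packages("refund")
        library(refund)
        set.seed(1)
        n  <- 50; J <- 2000                                 # moderately high-dimensional grid
        tt <- seq(0, 1, length.out = J)
        Y  <- outer(rnorm(n), sin(2 * pi * tt)) +
              matrix(rnorm(n * J, sd = 0.5), n, J)          # noisy functional data, n x J

        fit <- fpca.face(Y, knots = 100, pve = 0.99)        # FACE: fast covariance smoothing + FPCA
        dim(fit$efunctions)                                 # smoothed eigenfunctions (J x npc)
        fit$evalues                                         # corresponding eigenvalues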

    Bayesian Analysis for Penalized Spline Regression Using WinBUGS

    Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. Good mixing properties of the MCMC chains are obtained by using low-rank thin-plate splines, while simulation times per iteration are reduced by employing WinBUGS-specific computational tricks.
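
    The low-rank thin-plate construction behind this mixed-model view can be sketched in a few lines of R (a sketch under my own simplifications, not the paper's WinBUGS programs): the design splits into fixed-effect columns X and random-effect columns Z, and any mixed-model or MCMC engine, WinBUGS included, can then treat the Z coefficients as independent normal random effects.

        set.seed(1)
        n <- 200
        x <- sort(runif(n))
        y <- sin(3 * pi * x) + rnorm(n, sd = 0.3)

        K     <- 20                                          # number of knots
        knots <- quantile(unique(x), seq(0, 1, length.out = K + 2))[-c(1, K + 2)]
        X     <- cbind(1, x)                                 # fixed-effects design (intercept, slope)
        Zk    <- abs(outer(x, knots, "-"))^3                 # |x - kappa_k|^3 radial basis
        Omega <- abs(outer(knots, knots, "-"))^3             # penalty matrix |kappa_k - kappa_l|^3
        sv    <- svd(Omega)
        Z     <- Zk %*% sv$u %*% diag(1 / sqrt(sv$d)) %*% t(sv$v)   # Zk %*% Omega^{-1/2}

        # Mixed-model form: y = X %*% beta + Z %*% u + e with u ~ N(0, sigma_u^2 I).
        # Sampling beta, u, sigma_u^2, sigma_e^2 (e.g., with Gibbs steps in WinBUGS)
        # performs the penalized-spline smoothing.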

    Fast Generalized Functional Principal Components Analysis

    We propose a new fast generalized functional principal components analysis (fast-GFPCA) algorithm for dimension reduction of non-Gaussian functional data. The method consists of: (1) binning the data within the functional domain; (2) fitting local random-intercept generalized linear mixed models in every bin to obtain the initial estimates of the person-specific functional linear predictors; (3) using fast functional principal component analysis to smooth the linear predictors and obtain their eigenfunctions; and (4) estimating the global model conditional on the eigenfunctions of the linear predictors. An extensive simulation study shows that fast-GFPCA performs as well as or better than existing state-of-the-art approaches, is orders of magnitude faster than existing general-purpose GFPCA methods, and scales up well with both the number of observed curves and the number of observations per curve. The methods were motivated by and applied to a study of active/inactive physical activity profiles obtained from wearable accelerometers in the NHANES 2011-2014 study. The method can be implemented by any user familiar with mixed model software, though the R package fastGFPCA is provided for convenience.
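
    A hedged sketch of the four steps, written with generic tools (lme4 for the local GLMMs and refund::fpca.face for the FPCA step) rather than the fastGFPCA package, whose interface is not assumed here; binary curves on a common grid, the bin width, and all tuning values are illustrative.

        library(lme4)        # step 2: local random-intercept GLMMs
        library(refund)      # step 3: fast FPCA of the estimated linear predictors
        set.seed(1)
        n  <- 100; J <- 200
        tt  <- seq(0, 1, length.out = J)
        eta <- -1 + outer(rnorm(n), sin(2 * pi * tt))                 # true linear predictors
        Y   <- matrix(rbinom(n * J, 1, plogis(eta)), n, J)            # binary functional data

        # Steps 1-2: bin the domain; fit a random-intercept logistic GLMM in each bin.
        bins    <- cut(seq_len(J), breaks = 40, labels = FALSE)
        eta_hat <- matrix(NA_real_, n, max(bins))
        for (b in seq_len(max(bins))) {
          cols <- which(bins == b)
          d <- data.frame(y  = as.vector(Y[, cols]),
                          id = factor(rep(seq_len(n), times = length(cols)), levels = seq_len(n)))
          m <- glmer(y ~ 1 + (1 | id), data = d, family = binomial)
          eta_hat[, b] <- fixef(m)[1] + ranef(m)$id[, 1]              # subject-level linear predictor in bin b
        }

        # Step 3: smooth the binned linear predictors and extract their eigenfunctions.
        fpc <- fpca.face(eta_hat, knots = 20, pve = 0.95)

        # Step 4 (omitted here): refit the global model conditional on fpc$efunctions,
        # using the eigenfunctions as the design for subject-level random effects.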

    Multilevel Sparse Functional Principal Component Analysis

    The basic observational unit in this paper is a function. Data are assumed to have a natural hierarchy of basic units. A simple example is when functions are recorded at multiple visits for the same subject. Di et al. (2009) proposed Multilevel Functional Principal Component Analysis (MFPCA) for this type of data structure when functions are densely sampled. Here we consider the case when functions are sparsely sampled and may contain as few as 2 or 3 observations per function. As with MFPCA, we exploit the multilevel structure of covariance operators and the data reduction induced by the use of principal component bases. However, we address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; and 2) estimate the functional scores and predict the underlying curves. We show that in the sparse context 1) is harder, and we propose an algorithm to circumvent the problem. Surprisingly, we show that 2) is easier via new BLUP calculations. Using simulations and a real data analysis, we show that the ability of our method to reconstruct underlying curves from few observations is remarkable. This approach is illustrated by an application to the Sleep Heart Health Study, which contains two electroencephalographic (EEG) series at two visits for each subject.
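
    For reference, the two-level decomposition underlying MFPCA in Di et al. (2009), which this paper carries over to sparsely observed functions, can be written as (notation mine):

        Y_{ij}(t) = \mu(t) + \eta_j(t)
                    + \sum_{k} \xi_{ik}\, \phi_k^{(1)}(t)
                    + \sum_{l} \zeta_{ijl}\, \phi_l^{(2)}(t)
                    + \varepsilon_{ij}(t),
        \qquad \xi_{ik} \sim N\big(0, \lambda_k^{(1)}\big), \quad
               \zeta_{ijl} \sim N\big(0, \lambda_l^{(2)}\big),

    where i indexes subjects and j visits, the phi's are the level-1 (subject) and level-2 (visit) eigenfunctions, and epsilon is measurement error. With sparse sampling, the covariance operators that define the eigenfunctions must be estimated from few, irregular points per curve (the harder step 1 above), while the scores xi and zeta are predicted by BLUPs given the observed values (the easier step 2).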

    Bayesian Analysis for Penalized Spline Regression Using WinBUGS

    Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. MCMC mixing is substantially improved over previous versions by using low-rank thin-plate splines instead of a truncated polynomial basis. Simulation time per iteration is reduced 5 to 10 times using a computational trick.