
    Knot selection by boosting techniques

    A novel concept for estimating smooth functions by selection techniques based on boosting is developed. It is suggested to put radial basis functions with different spreads at each knot and to do selection and estimation simultaneously by a componentwise boosting algorithm. The methodology of various other smoothing and knot selection procedures (e.g. stepwise selection) is summarized. They are compared to the proposed approach by extensive simulations for various unidimensional settings, including varying spatial variation and heteroskedasticity, as well as on a real-world data example. Finally, an extension of the proposed method to surface fitting is evaluated numerically on both simulated and real data. The proposed knot selection technique is shown to be a strong competitor to existing methods for knot selection.
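    The core idea of componentwise boosting is simple to sketch: build a dictionary of candidate basis functions (here, one Gaussian RBF per knot-spread pair), and at each iteration fit only the single basis function that most reduces the residual sum of squares, adding a shrunken copy of that fit. The following is a minimal illustration of this idea, not the paper's algorithm; all names and the stopping rule are assumptions.

    ```python
    import numpy as np

    def componentwise_boost(x, y, knots, spreads, n_iter=200, nu=0.1):
        """Componentwise boosting sketch: at each step, select the one RBF
        basis function that best fits the current residual and add a
        shrunken (step size nu) least-squares update for it."""
        # Candidate dictionary: one Gaussian RBF per (knot, spread) pair.
        B = np.column_stack([np.exp(-(x - k) ** 2 / (2 * s ** 2))
                             for k in knots for s in spreads])
        coef = np.zeros(B.shape[1])
        offset = y.mean()
        resid = y - offset
        for _ in range(n_iter):
            num = B.T @ resid              # inner products with the residual
            den = (B ** 2).sum(axis=0)     # per-column squared norms
            beta = num / den               # componentwise least-squares fits
            j = np.argmax(num * beta)      # largest RSS reduction num^2/den
            coef[j] += nu * beta[j]
            resid -= nu * beta[j] * B[:, j]
        return offset, coef, B

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    y = np.sin(4 * np.pi * x) + 0.1 * rng.standard_normal(200)
    offset, coef, B = componentwise_boost(
        x, y, knots=np.linspace(0, 1, 20), spreads=[0.02, 0.05, 0.1])
    fit = offset + B @ coef
    ```

    Because each update is a damped step in a descent direction, the training residual decreases monotonically; the shrinkage factor `nu` and the iteration count play the role of the usual boosting regularization.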

    Warm-started wavefront reconstruction for adaptive optics

    Future extreme adaptive optics (ExAO) systems have been suggested with up to 10^5 sensors and actuators. We analyze the computational speed of iterative reconstruction algorithms for such large systems. We compare a total of 15 different scalable methods, including multigrid, preconditioned conjugate-gradient, and several new variants of these. Simulations on a 128×128 square sensor/actuator geometry using Taylor frozen-flow dynamics are carried out using both open-loop and closed-loop measurements, and algorithms are compared on the basis of the mean squared error and floating-point multiplications required. We also investigate the use of warm starting, where the most recent estimate is used to initialize the iterative scheme. In open-loop estimation or pseudo-open-loop control, warm starting provides a significant computational speedup; almost every algorithm tested converges in one iteration. In a standard closed-loop implementation, using a single iteration per time step, most algorithms give the minimum error even in cold start, and every algorithm gives the minimum error if warm started. The best algorithm is therefore the one with the smallest computational cost per iteration, not necessarily the one with the best quasi-static performance.
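    The warm-starting idea can be illustrated with plain conjugate gradient on a toy symmetric positive-definite system: when the right-hand side drifts slowly between frames (frozen-flow dynamics), initializing from the previous solution leaves only a small residual to remove. This is a generic illustration under assumed toy dynamics, not the paper's reconstructor.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=500):
        """Plain conjugate gradient for SPD A, started from x0.
        Returns the solution and the number of iterations used."""
        x = x0.copy()
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for k in range(max_iter):
            if np.sqrt(rs) < tol:
                return x, k
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x, max_iter

    rng = np.random.default_rng(1)
    n = 100
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)        # SPD stand-in for the reconstructor
    x_prev = rng.standard_normal(n)    # previous-frame wavefront estimate
    # Slowly evolving frame: the new solution is a small perturbation.
    b_next = A @ (x_prev + 0.01 * rng.standard_normal(n))

    _, iters_cold = conjugate_gradient(A, b_next, np.zeros(n))
    _, iters_warm = conjugate_gradient(A, b_next, x_prev)
    ```

    With the warm start the initial residual is roughly two orders of magnitude smaller, so convergence to a fixed tolerance takes fewer iterations; in a real-time loop this is what makes a single iteration per time step sufficient.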

    Classical and Bayesian Linear Data Estimators for Unique Word OFDM

    Unique word - orthogonal frequency division multiplexing (UW-OFDM) is a novel OFDM signaling concept, where the guard interval is built of a deterministic sequence - the so-called unique word - instead of the conventional random cyclic prefix. In contrast to previous attempts with deterministic sequences in the guard interval, the addressed UW-OFDM signaling approach introduces correlations between the subcarrier symbols, which can be exploited by the receiver in order to improve the bit error ratio performance. In this paper we develop several linear data estimators specifically designed for UW-OFDM, some based on classical and some based on Bayesian estimation theory. Furthermore, we derive complexity-optimized versions of these estimators, and we study their individual complex multiplication counts in detail. Finally, we evaluate the estimators' performance for the additive white Gaussian noise channel as well as for selected indoor multipath channel scenarios.
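    The classical-versus-Bayesian distinction can be sketched with a generic linear model y = HGd + n, where G is a tall generator matrix that introduces correlations among the transmitted symbols: a classical least-squares (zero-forcing) estimator ignores the symbol prior, while an LMMSE estimator exploits it together with the noise level. The matrices below are illustrative stand-ins, not the UW-OFDM generator from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_data, n_sub = 8, 12
    # Toy tall generator matrix mapping data symbols to correlated
    # subcarrier symbols (stand-in for the UW-OFDM code generator).
    G = rng.standard_normal((n_sub, n_data)) / np.sqrt(n_data)
    H = np.diag(rng.standard_normal(n_sub) + 2.0)  # per-subcarrier channel
    sigma2 = 0.1                                    # noise variance

    d = rng.choice([-1.0, 1.0], size=n_data)        # BPSK data symbols
    y = H @ G @ d + np.sqrt(sigma2) * rng.standard_normal(n_sub)

    A = H @ G
    # Classical estimator: least squares / zero forcing (no symbol prior).
    d_zf = np.linalg.lstsq(A, y, rcond=None)[0]
    # Bayesian LMMSE estimator: assumes E[d d^T] = I and known sigma2.
    d_lmmse = np.linalg.solve(A.T @ A + sigma2 * np.eye(n_data), A.T @ y)
    ```

    The only difference between the two solves is the regularizing term `sigma2 * I`, which is exactly where the Bayesian prior on the data symbols enters; on average it trades a small bias for lower estimation variance.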

    A Simplified Crossing Fiber Model in Diffusion Weighted Imaging

    Diffusion MRI (dMRI) is a vital source of imaging data for identifying anatomical connections in the living human brain that form the substrate for information transfer between brain regions. dMRI can thus play a central role toward our understanding of brain function. The quantitative modeling and analysis of dMRI data deduces the features of neural fibers at the voxel level, such as direction and density. The modeling methods that have been developed range from deterministic to probabilistic approaches. Currently, the Ball-and-Stick model serves as a widely implemented probabilistic approach in the tractography toolbox of the popular FSL software package and the FreeSurfer/TRACULA software package. However, estimation of the features of neural fibers is complex under the scenario of two crossing neural fibers, which occurs in a sizeable proportion of voxels within the brain. A Bayesian non-linear regression is adopted, comprising a mixture of multiple non-linear components. Such models can pose a computationally difficult statistical estimation problem. To make the Ball-and-Stick approach more feasible and accurate, we propose a simplified version of the Ball-and-Stick model that reduces the dimensionality of the parameter space. This simplified model is vastly more efficient in terms of the computation time required to estimate parameters pertaining to two crossing neural fibers through Bayesian simulation approaches. Moreover, the performance of this new model is comparable or better in terms of bias and estimation variance as compared to existing models.
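    The Ball-and-Stick forward model is a mixture: an isotropic "ball" compartment plus one anisotropic "stick" per fiber, each stick attenuating the signal only along its direction. A minimal forward-model sketch for the two-crossing-fiber case follows; the function name, the gradient scheme, and the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def ball_and_stick(bvals, bvecs, S0, d, fractions, sticks):
        """Forward signal for the Ball-and-Stick model with possibly
        crossing fibers: an isotropic ball plus one stick per fiber.
        fractions are volume fractions; sticks are fiber directions."""
        # Ball compartment: isotropic diffusion with the leftover fraction.
        signal = (1.0 - np.sum(fractions)) * np.exp(-bvals * d)
        for f, v in zip(fractions, sticks):
            v = v / np.linalg.norm(v)
            # Stick compartment: diffusion only along fiber direction v.
            signal = signal + f * np.exp(-bvals * d * (bvecs @ v) ** 2)
        return S0 * signal

    rng = np.random.default_rng(3)
    n_dir = 64
    bvecs = rng.standard_normal((n_dir, 3))
    bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
    bvals = np.full(n_dir, 1000.0)  # s/mm^2, single-shell acquisition
    # Two fibers crossing at 90 degrees, volume fractions 0.4 and 0.3.
    S = ball_and_stick(bvals, bvecs, S0=1.0, d=1.5e-3,
                       fractions=[0.4, 0.3],
                       sticks=[np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 1.0, 0.0])])
    ```

    Fitting this model means inverting the mixture for the fractions and directions, which is what makes the two-fiber case a hard non-linear estimation problem; reducing the parameter space, as the abstract proposes, shrinks the search the Bayesian sampler must perform.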

    Integrating Data Transformation in Principal Components Analysis

    Principal component analysis (PCA) is a popular dimension-reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such a transformation is usually obtained from previous studies, prior knowledge, or trial and error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples. Supplementary materials for this article are available online.
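    A simplified stand-in for this pipeline is to choose a per-variable Box-Cox transformation by maximum likelihood and then run ordinary PCA on the transformed data; the paper's contribution is to do the transformation selection and PCA jointly via the profile likelihood, which the sketch below does not attempt.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Skewed positive data: exponentiated low-rank Gaussian structure.
    latent = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))
    X = np.exp(latent + 0.1 * rng.standard_normal((200, 5)))

    # Per-variable Box-Cox transform with lambda chosen by maximum
    # likelihood (scipy's default when lmbda is omitted) -- a simplified
    # stand-in for the paper's integrated profile-likelihood criterion.
    Xt = np.empty_like(X)
    lams = []
    for j in range(X.shape[1]):
        Xt[:, j], lam = stats.boxcox(X[:, j])
        lams.append(lam)

    # Ordinary PCA on the transformed, centered data via SVD.
    Xc = Xt - Xt.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    ```

    On this synthetic example the estimated lambdas sit near zero (a log transform), and the transformed data recovers the underlying low-rank structure that PCA on the raw skewed data would miss.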

    Computing missing values in time series

    This work presents two algorithms to estimate missing values in time series. The first is the Kalman filter, as developed by Kohn and Ansley (1986) and others. The second is the additive outlier approach, developed by Peña, Ljung and Maravall. Both are exact and lead to the same results. However, the first is, in general, faster and the second more flexible.
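    The Kalman filter handles a missing observation naturally: the time update runs as usual and the measurement update is simply skipped, so the state prediction fills the gap. The sketch below does this for a local-level (random-walk) model; a full treatment such as Kohn and Ansley's would also run a backward smoothing pass, which is omitted here for brevity, and all parameter values are assumptions.

    ```python
    import numpy as np

    def kalman_interpolate(y, q=1.0, r=1.0):
        """Kalman filter for a local-level (random-walk) model that skips
        the measurement update where y is NaN, so the state prediction
        stands in for the missing observations."""
        n = len(y)
        x, P = 0.0, 1e6           # diffuse initial state
        est = np.empty(n)
        for t in range(n):
            P = P + q             # time update: random walk, variance q
            if not np.isnan(y[t]):
                K = P / (P + r)   # Kalman gain, observation variance r
                x = x + K * (y[t] - x)
                P = (1 - K) * P
            est[t] = x            # filtered estimate (prediction at gaps)
        return est

    y = np.array([1.0, 1.2, np.nan, np.nan, 1.8, 2.0])
    est = kalman_interpolate(y)
    ```

    In the filtered output, the estimate is held at the last updated state across the gap; a smoother would instead blend information from both sides of the gap, which is what makes the exact interpolators in the text equivalent.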