
    LASSO ISOtone for High Dimensional Additive Isotonic Regression

    Additive isotonic regression attempts to determine the relationship between a multi-dimensional observation variable and a response, under the constraint that the estimate is the additive sum of univariate component effects that are monotonically increasing. In this article, we present a new method for such regression called LASSO Isotone (LISO). LISO adapts ideas from sparse linear modelling to additive isotonic regression. Thus, it is viable in many situations with high dimensional predictor variables, where selection of significant versus insignificant variables is required. We suggest an algorithm involving a modification of the backfitting algorithm CPAV. We give a numerical convergence result, and finally examine some of its properties through simulations. We also suggest some possible extensions that improve performance and allow calculation to be carried out when the direction of the monotonicity is unknown.
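
    As a rough illustration of the general idea, and not the authors' LISO algorithm, the sketch below backfits monotone components with scikit-learn's IsotonicRegression and shrinks each fitted component toward zero; the shrinkage step and the parameter lam are heuristic stand-ins for a lasso-type penalty.

```python
# Illustrative backfitting sketch for additive isotonic regression with an
# L1-style shrinkage of each component (NOT the authors' LISO algorithm).
import numpy as np
from sklearn.isotonic import IsotonicRegression

def additive_isotonic_backfit(X, y, lam=0.1, n_iter=50):
    """Fit y ~ sum_j f_j(x_j) with every f_j monotonically increasing.

    lam shrinks each fitted component toward zero, mimicking a lasso-type
    penalty; this is a heuristic stand-in, not the exact LISO update.
    """
    n, p = X.shape
    components = np.zeros((n, p))      # current values f_j(x_ij)
    intercept = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove all components except the j-th
            partial = y - intercept - components.sum(axis=1) + components[:, j]
            iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
            fit_j = iso.fit_transform(X[:, j], partial)
            fit_j -= fit_j.mean()      # centre the component
            # soft-threshold-style shrinkage toward zero (illustrative)
            components[:, j] = np.sign(fit_j) * np.maximum(np.abs(fit_j) - lam, 0.0)
    return intercept, components
```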

    Nonlinear Structural Functional Models

    A common objective in functional data analyses is the registration of data curves and estimation of the locations of their salient structures, such as spikes or local extrema. Existing methods separate curve modeling and structure estimation into disjoint steps, optimize different criteria for estimation, or recast the problem into the testing framework. Moreover, curve registration is often implemented in a pre-processing step. The aim of this dissertation is to ameliorate the shortcomings of existing methods through the development of unified nonlinear modeling procedures for the analysis of structural functional data. A general model-based framework is proposed to unify registration and estimation of curves and their structures. In particular, this work focuses on three specific research problems. First, a Sparse Semiparametric Nonlinear Model (SSNM) is proposed to jointly register curves, perform model selection, and estimate the features of sparsely-structured functional data. The SSNM is fitted to chromatographic data from a study of the composition of Chinese rhubarb. Next, the SSNM is extended to the nonlinear mixed effects setting to enable the comparison of sparse structures across group-averaged curves. The model is utilized to compare compositions of medicinal herbs collected from two groups of production sites. Finally, a Piecewise Monotonic B-spline Model (PMBM) is proposed to estimate the locations of local extrema in a curve. The PMBM is applied to MRI data from a study of gray matter growth in the brain.
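
    The PMBM itself is not reproduced here; purely as a generic illustration of estimating the locations of local extrema from a noisy curve, the sketch below smooths the data with a SciPy spline and reads off the roots of its first derivative. The spline degree, smoothing level, and threshold are assumptions, not the dissertation's model.

```python
# Generic sketch: smooth a noisy curve with a spline and locate local extrema
# as zeros of the fitted first derivative (not the PMBM of the dissertation).
import numpy as np
from scipy.interpolate import UnivariateSpline

def estimate_local_extrema(t, y, smoothing=None):
    """Return estimated locations of local extrema of the underlying curve."""
    spline = UnivariateSpline(t, y, k=4, s=smoothing)  # k=4 so the derivative is cubic
    deriv = spline.derivative()
    roots = deriv.roots()                              # zeros of the fitted f'(t)
    second = spline.derivative(n=2)
    # keep roots where the second derivative is clearly non-zero (true extrema)
    return roots[np.abs(second(roots)) > 1e-8]

# Example: a noisy bump with one interior maximum near t = 0.4
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.exp(-(t - 0.4) ** 2 / 0.01) + 0.05 * rng.normal(size=t.size)
print(estimate_local_extrema(t, y, smoothing=t.size * 0.05**2))  # s ~ n * noise variance
```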

    Nonparametric Methods in Astronomy: Think, Regress, Observe -- Pick Any Three

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a Python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
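
    The Python library described in the abstract is not named here, so the sketch below shows only a textbook nonparametric regressor, a Nadaraya-Watson kernel smoother with a Gaussian kernel; the bandwidth value is an assumption and would normally be chosen by cross-validation.

```python
# Generic Nadaraya-Watson kernel regression with a Gaussian kernel; a standard
# nonparametric estimator, not the library accompanying the paper.
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.1):
    """Locally weighted mean of y_train; bandwidth is assumed, tune it by CV."""
    d2 = (x_eval[:, None] - x_train[None, :]) ** 2   # pairwise squared distances
    w = np.exp(-0.5 * d2 / bandwidth**2)             # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)
x_grid = np.linspace(0, 1, 50)
y_hat = nadaraya_watson(x, y, x_grid, bandwidth=0.05)
```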

    Stable computational methods for additive binomial models with application to adjusted risk differences

    Risk difference is an important measure of effect size in biostatistics, for both randomised and observational studies. The natural way to adjust risk differences for potential confounders is to use an additive binomial model, which is a binomial generalised linear model with an identity link function. However, implementations of the additive binomial model in commonly used statistical packages can fail to converge to the maximum likelihood estimate (MLE), necessitating the use of approximate methods involving misspecified or inflexible models. A novel computational method is proposed, which retains the additive binomial model but uses the multinomial–Poisson transformation to convert the problem into an equivalent additive Poisson fit. The method allows reliable computation of the MLE, as well as allowing for semi-parametric monotonic regression functions. The performance of the method is examined in simulations and it is used to analyse two datasets from clinical trials in acute myocardial infarction. Source code for implementing the method in R is provided as supplementary material (see Appendix A).
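
    The paper's own method works through the multinomial–Poisson transformation and ships R code; purely as a rough illustration of the model being fitted, the sketch below fits an identity-link binomial GLM directly with statsmodels (a recent version is assumed). The simulated variables are hypothetical, and this direct fit is exactly the approach whose convergence problems motivate the paper.

```python
# Illustrative identity-link binomial GLM (adjusted risk-difference model).
# This naive direct fit can fail to converge, which is what motivates the
# paper's multinomial-Poisson approach; all variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
treatment = rng.integers(0, 2, n)                  # binary exposure
age = rng.normal(50, 10, n)                        # continuous confounder
p = np.clip(0.15 + 0.10 * treatment + 0.002 * (age - 50), 0.01, 0.99)
outcome = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treatment, age - 50]))
model = sm.GLM(outcome, X,
               family=sm.families.Binomial(link=sm.families.links.Identity()))
result = model.fit()       # may fail to converge for less well-behaved data
print(result.params)       # coefficient on treatment ~ adjusted risk difference
```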

    Shape-constrained Estimation of Value Functions

    We present a fully nonparametric method to estimate the value function, via simulation, in the context of expected infinite-horizon discounted rewards for Markov chains. Estimating such value functions plays an important role in approximate dynamic programming and applied probability in general. We incorporate "soft information" into the estimation algorithm, such as knowledge of convexity, monotonicity, or Lipschitz constants. In the presence of such information, a nonparametric estimator for the value function can be computed that is provably consistent as the simulated time horizon tends to infinity. As an application, we implement our method on price tolling agreement contracts in energy markets.
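
    As a toy illustration of using monotonicity as "soft information", and not the paper's estimator or its consistency analysis, the sketch below forms Monte Carlo estimates of a discounted value at sampled states of a made-up chain and then projects them onto the set of monotone functions with scikit-learn's IsotonicRegression.

```python
# Toy sketch: Monte Carlo value estimates at sampled states, followed by a
# monotone (isotonic) projection as the shape constraint. Not the paper's method.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
gamma = 0.95                                   # discount factor

def simulate_discounted_reward(s0, horizon=200):
    """One simulated discounted-reward trajectory of a toy chain started at s0."""
    s, total = s0, 0.0
    for t in range(horizon):
        total += gamma**t * s                  # toy reward: the current state
        s = 0.9 * s + rng.normal(scale=0.1)    # toy AR(1)-style transition
    return total

states = np.linspace(0.0, 1.0, 50)
raw = np.array([np.mean([simulate_discounted_reward(s) for _ in range(30)])
                for s in states])
# impose monotonicity of the value function in the state
value_hat = IsotonicRegression(increasing=True).fit_transform(states, raw)
```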

    Change-point Problem and Regression: An Annotated Bibliography

    The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as "disorder". The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently in the comparison and matching of DNA sequences in microarray data analysis. Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods which have been applied to resolving challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem. Statistical analysis of change-point problems depends on the method of data collection. If the data collection is ongoing until some random time, then the appropriate statistical procedure is called sequential. If, however, a large finite set of data is collected with the purpose of determining if at least one change-point occurred, then this may be referred to as non-sequential. Not surprisingly, both the former and the latter have a rich literature with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression. The area of the change-point problem has been the subject of intensive research in the past half-century. The subject has evolved considerably and found applications in many different areas. It seems rather impossible to summarize all of the research carried out over the past 50 years on the change-point problem. We have therefore confined ourselves to those articles on change-point problems which pertain to regression. The important branch of sequential procedures in change-point problems has been left out entirely. We refer the readers to the seminal review papers by Lai (1995, 2001). The so-called structural change models, which occupy a considerable portion of the research in the area of change-point, particularly among econometricians, have not been fully considered. We refer the reader to Perron (2005) for an updated review in this area. Articles on change-point in time series are considered only if the methodologies presented in the paper pertain to regression analysis.
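
    As a minimal example of one of the approaches surveyed above, the sketch below performs a grid search for a single change point in two-phase (segmented) regression, fitting separate least-squares lines on each side of every candidate break; the minimum segment length is an assumption.

```python
# Grid search for a single change point in two-phase (segmented) regression:
# fit separate least-squares lines on each side of every candidate break and
# keep the break that minimizes the total residual sum of squares.
import numpy as np

def grid_search_changepoint(x, y, min_seg=5):
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_sse, best_tau = np.inf, None
    for i in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, deg=1)                 # straight-line fit
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)  # segment residuals
        if sse < best_sse:
            best_sse, best_tau = sse, x[i]
    return best_tau
```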

    Adaptive spline fitting with particle swarm optimization

    In fitting data with a spline, finding the optimal placement of knots can significantly improve the quality of the fit. However, the challenging high-dimensional and non-convex optimization problem associated with completely free knot placement has been a major roadblock in using this approach. We present a method that uses particle swarm optimization (PSO) combined with model selection to address this challenge. The problem of overfitting due to knot clustering that accompanies free knot placement is mitigated in this method by explicit regularization, resulting in a significantly improved performance on highly noisy data. The principal design choices available in the method are delineated and a statistically rigorous study of their effect on performance is carried out using simulated data and a wide variety of benchmark functions. Our results demonstrate that PSO-based free knot placement leads to a viable and flexible adaptive spline fitting approach that allows the fitting of both smooth and non-smooth functions.
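
    The sketch below is a bare-bones version of the idea, not the paper's algorithm: a small particle swarm searches over interior knot locations of a cubic least-squares spline, scored by residual sum of squares alone (the paper's explicit regularization and model selection are omitted), and all swarm parameters are assumptions.

```python
# Minimal free-knot spline fitting with a small particle swarm: each particle
# is a vector of interior knot locations, scored by the residual sum of squares
# of a cubic LSQ spline. Regularization and model selection are omitted.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)

def spline_sse(x, y, knots):
    """Residual sum of squares of a cubic LSQ spline with the given interior knots."""
    t = np.sort(knots)
    # reject invalid knot sets (outside the data range or too close together)
    if t[0] <= x[0] or t[-1] >= x[-1] or np.min(np.diff(t)) < 1e-3:
        return np.inf
    try:
        spl = LSQUnivariateSpline(x, y, t, k=3)
    except ValueError:                 # Schoenberg-Whitney condition violated
        return np.inf
    return float(np.sum((y - spl(x)) ** 2))

def pso_knots(x, y, n_knots=5, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = x[0], x[-1]
    pos = rng.uniform(lo, hi, (n_particles, n_knots))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([spline_sse(x, y, p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([spline_sse(x, y, p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return np.sort(gbest)

x = np.linspace(0, 1, 300)
y = np.sin(8 * np.pi * x**2) + 0.1 * rng.normal(size=x.size)
print(pso_knots(x, y))
```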
