390 research outputs found

    An analysis of life expectancy and economic production using expectile frontier zones

    The wealth of a country is assumed to have a strong non-linear influence on the life expectancy of its inhabitants. We follow up on research by Preston and study the relationship with gross domestic product. Smooth curves for the average, but also for (upper) frontiers, are constructed by a combination of least asymmetrically weighted squares and P-splines. Guidelines are given for optimizing the amount of smoothing and the definition of frontiers. The model is applied to a large set of countries in different years. It is also used to estimate life expectancy performance for individual countries and to show how it changed over time.
    Keywords: frontier estimation, gross domestic product, least asymmetrically weighted squares, life expectancy, production frontier, smoothing
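    The frontier curves described above rest on expectiles estimated by least asymmetrically weighted squares (LAWS). A minimal sketch of the iteration, assuming a generic design matrix X in place of the P-spline basis and penalty used in the paper:

```python
import numpy as np

def expectile_fit(X, y, tau=0.95, max_iter=50):
    """Least asymmetrically weighted squares (LAWS) for the tau-expectile.

    Sketch only: X is a generic design matrix standing in for the
    P-spline basis (and penalty) of the paper; tau close to 1 traces
    an upper frontier, tau = 0.5 recovers ordinary least squares.
    """
    w = np.full(len(y), 0.5)          # start from symmetric weights
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        # Weighted least squares with the current asymmetric weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        # Points above the fit get weight tau, points below get 1 - tau
        w_new = np.where(y > X @ beta_new, tau, 1.0 - tau)
        if np.array_equal(w_new, w) and np.allclose(beta_new, beta):
            break
        beta, w = beta_new, w_new
    return beta
```

    With tau = 0.5 and an intercept-only design this reproduces the sample mean; raising tau toward 1 pushes the fitted curve toward an upper frontier of the point cloud.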

    League tables for literacy survey data based on random effect models

    Data from the International Adult Literacy Survey are used to illustrate how league tables can be obtained from summary data, consisting of percentages and their standard errors, using random effects models estimated by nonparametric maximum likelihood.

    Varying Coefficient Tensor Models for Brain Imaging

    We revisit a multidimensional varying-coefficient model (VCM), by allowing regressor coefficients to vary smoothly in more than one dimension, thereby extending the VCM of Hastie and Tibshirani. The motivating example is 3-dimensional, involving a special type of nuclear magnetic resonance measurement technique that is being used to estimate the diffusion tensor at each point in the human brain. We aim to improve the current state of the art, which is to apply a multiple regression model for each voxel separately using information from six or more volume images. We present a model, based on P-spline tensor products, to introduce spatial smoothness of the estimated diffusion tensor. Since the regression design matrix is space-invariant, a 4-dimensional tensor product model results, allowing more efficient computation with penalized array regression.

    Space-Varying Coefficient Models for Brain Imaging

    The methodological development and the application in this paper originate from diffusion tensor imaging (DTI), a powerful nuclear magnetic resonance technique enabling diagnosis and monitoring of several diseases as well as reconstruction of neural pathways. We reformulate the current analysis framework of separate voxelwise regressions as a 3d space-varying coefficient model (VCM) for the entire set of DTI images recorded on a 3d grid of voxels. By borrowing strength from spatially adjacent voxels, smoothing noisy observations, and estimating diffusion tensors at any location within the brain, the three-step cascade of standard data processing is overcome in a single step. We conceptualize two VCM variants based on B-spline basis functions: a full tensor product approach and a sequential approximation, rendering the VCM numerically and computationally feasible even for the huge dimension of the joint model in a realistic setup. A simulation study shows that both approaches outperform the standard method of voxelwise regressions with subsequent regularization. Owing to its greater efficiency, we apply the sequential method to a clinical DTI data set and demonstrate the inherent ability to increase the rigid grid resolution by evaluating the incorporated basis functions at intermediate points. In conclusion, the suggested fitting methods clearly improve on the current state of the art, but amelioration of local adaptivity remains desirable.
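    The full tensor product variant rests on combining one-dimensional B-spline bases. A minimal sketch of that construction for two dimensions, using a row-wise Kronecker product; helper names and knot settings are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, nseg=8, deg=3):
    """Equally spaced B-spline basis matrix (illustrative settings)."""
    xl, xr = x.min(), x.max()
    dx = (xr - xl) / nseg
    # Knots extended beyond the data range to support the edge splines
    knots = xl + dx * np.arange(-deg, nseg + deg + 1)
    n = len(knots) - deg - 1
    B = np.array([BSpline.basis_element(knots[i:i + deg + 2],
                                        extrapolate=False)(x)
                  for i in range(n)]).T
    return np.nan_to_num(B)           # zero outside each element's support

def row_kron(B1, B2):
    """Row-wise Kronecker ('box') product: the tensor-product design
    matrix for observations with coordinates (x1, x2)."""
    return np.einsum('ij,ik->ijk', B1, B2).reshape(B1.shape[0], -1)
```

    Each row of row_kron(B1, B2) still sums to one, so the two-dimensional basis inherits the partition-of-unity property of its one-dimensional factors; difference penalties are then applied along each coefficient direction.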

    Improved Dynamic Predictions from Joint Models of Longitudinal and Survival Data with Time-Varying Effects using P-splines

    In the field of cardio-thoracic surgery, valve function is monitored over time after surgery. The motivation for our research comes from a study which includes patients who received a human tissue valve in the aortic position. These patients are followed prospectively over time by standardized echocardiographic assessment of valve function. Loss of follow-up could be caused by valve intervention or the death of the patient. One of the main characteristics of the human valve is that its durability is limited. Therefore, it is of interest to obtain a prognostic model in order for the physicians to scan trends in valve function over time and plan their next intervention, accounting for the characteristics of the data. Several authors have focused on deriving predictions under the standard joint modeling of longitudinal and survival data framework that assumes a constant effect for the coefficient that links the longitudinal and survival outcomes. However, in our case this may be a restrictive assumption. Since the valve degenerates, the association between the biomarker with survival may change over time. To improve dynamic predictions we propose a Bayesian joint model that allows a time-varying coefficient to link the longitudinal and the survival processes, using P-splines. We evaluate the performance of the model in terms of discrimination and calibration, while accounting for censoring.

    Twenty years of P-splines

    P-splines first appeared in the limelight twenty years ago. Since then they have become popular in applications and in theoretical work. The combination of a rich B-spline basis and a simple difference penalty lends itself well to a variety of generalizations, because it is based on regression. In effect, P-splines allow the building of a “backbone” for the “mixing and matching” of a variety of additive smooth structure components, while inviting all sorts of extensions: varying-coefficient effects, signal (functional) regressors, two-dimensional surfaces, non-normal responses, quantile (expectile) modelling, among others. Strong connections with mixed models and Bayesian analysis have been established. We give an overview of many of the central developments during the first two decades of P-splines.
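    The core P-spline recipe described above — a rich B-spline basis combined with a simple difference penalty — can be sketched in a few lines. Knot counts and the penalty weight below are illustrative defaults, not values from the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, nseg=20, deg=3, lam=1e-3, pord=2):
    """One-dimensional P-spline smoother: rich B-spline basis plus a
    difference penalty of order pord on adjacent coefficients.
    Illustrative defaults; in practice lam is tuned (e.g. by AIC)."""
    xl, xr = x.min(), x.max()
    dx = (xr - xl) / nseg
    # Equally spaced knots, extended beyond the data for the degree
    knots = xl + dx * np.arange(-deg, nseg + deg + 1)
    nb = len(knots) - deg - 1
    B = np.array([BSpline.basis_element(knots[i:i + deg + 2],
                                        extrapolate=False)(x)
                  for i in range(nb)]).T
    B = np.nan_to_num(B)
    D = np.diff(np.eye(nb), n=pord, axis=0)   # difference matrix
    # Penalized normal equations: (B'B + lam * D'D) a = B'y
    a = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ a, a
```

    Because the penalty acts on the coefficients rather than on derivatives, the same skeleton extends directly to the generalizations listed in the abstract by swapping the basis or the response distribution.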

    Reliable Single Chip Genotyping with Semi-Parametric Log-Concave Mixtures

    The common approach to SNP genotyping is to use (model-based) clustering per individual SNP, on a set of arrays. Genotyping all SNPs on a single array is much more attractive, in terms of flexibility, stability and applicability, when developing new chips. A new semi-parametric method, named SCALA, is proposed. It is based on a mixture model using semi-parametric log-concave densities. Instead of using the raw data, the mixture is fitted on a two-dimensional histogram, thereby making computation time almost independent of the number of SNPs. Furthermore, the algorithm is effective in low-MAF situations. Comparisons between SCALA and CRLMM on HapMap genotypes show very reliable calling of single arrays. Some heterozygous genotypes from HapMap are called homozygous by SCALA, and to a lesser extent by CRLMM too. Furthermore, HapMap's NoCalls (NN) could be genotyped by SCALA, mostly with high probability. The software is available as R scripts from the website www.math.leidenuniv.nl/~rrippe

    Modelling trends in digit preference patterns

    Get PDF
    Digit preference is the habit of reporting certain end digits more often than others. If such a misreporting pattern is a concern, then measures to reduce digit preference can be taken and monitoring changes in digit preference becomes important. We propose a two-dimensional penalized composite link model to estimate the true distributions unaffected by misreporting, the digit preference pattern and a trend in the preference pattern simultaneously. A transfer pattern is superimposed on a series of smooth latent distributions and is modulated along a second dimension. Smoothness of the latent distributions is enforced by a roughness penalty. Ridge regression with an L1-penalty is used to extract the misreporting pattern, and an additional weighted least squares regression estimates the modulating trend vector. Smoothing parameters are selected by the Akaike information criterion. We present a simulation study and apply the model to data on birth weight and on self-reported weight of adults.
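    The composite link model above treats observed counts as a linear composition of a smooth latent distribution. A toy sketch of that forward model, with a hypothetical heaping rule (mass p0 transferred to the nearest multiple of 10); in the paper the pattern is estimated from data rather than assumed:

```python
import numpy as np

def transfer_matrix(n, p0=0.3):
    """Composition matrix C for a toy digit-preference pattern: each
    true value keeps mass 1 - p0 and transfers p0 to the nearest
    multiple of 10 (clamped at the boundary). Both the rule and p0
    are hypothetical, for illustration only."""
    C = (1.0 - p0) * np.eye(n)
    for i in range(n):
        j = min(int(round(i / 10.0)) * 10, n - 1)   # nearest multiple of 10
        C[j, i] += p0
    return C
```

    Expected observed counts are then C @ gamma for latent counts gamma, so heaping appears at multiples of 10 even when gamma is perfectly smooth; the model combines such a composition with roughness penalties on gamma and an L1 penalty to recover the transfer pattern.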