
    A Bi-Variate Kaplan-Meier Estimator Via An Integral Equation

    A New Smooth Density Estimator for Non-Negative Random Variables

    Commonly used kernel density estimators may not provide admissible values of the density or its functionals at the boundaries for densities with restricted support. For smoothing the empirical distribution, a generalization of Hille's lemma, considered here, alleviates some of the problems of the kernel density estimator near the boundaries. For non-negative random variables, which crop up in reliability and survival analysis, the proposed procedure is thoroughly explored; its consistency and asymptotic distributional results are established under appropriate regularity assumptions. Methods of obtaining smoothing parameters through cross-validation are given, and graphical illustrations of the estimator for densities that are continuous (at zero) as well as discontinuous are provided.
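
    To make the construction above concrete, here is a minimal sketch (not the authors' implementation) of a Hille's-lemma-type smoother: the empirical distribution function is smoothed with Poisson weights and then differentiated, which keeps all of the estimated mass on [0, infinity). The smoothing parameter choice, the truncation of the Poisson sum, and the exponential test data are illustrative assumptions; as the abstract notes, the smoothing parameter would in practice be chosen by cross-validation.

```python
import numpy as np
from scipy.stats import poisson

def smooth_density(x_grid, data, lam):
    """Hille's-lemma-type smooth density estimate on [0, inf) (illustrative sketch).

    The empirical CDF F_n is smoothed with Poisson(lam * x) weights and
    differentiated, giving
        f_n(x) = lam * sum_k [F_n((k+1)/lam) - F_n(k/lam)] * P(K = k),
    where K ~ Poisson(lam * x).
    """
    data = np.sort(np.asarray(data, dtype=float))
    n = data.size
    # Truncate the infinite Poisson sum where the pmf is negligible.
    mu_max = lam * float(np.max(x_grid))
    k = np.arange(int(mu_max + 10.0 * np.sqrt(mu_max + 1.0)) + 1)
    # Empirical CDF at the lattice points k/lam; the last increment absorbs the tail.
    F = np.searchsorted(data, k / lam, side="right") / n
    dF = np.diff(np.append(F, 1.0))
    pmf = poisson.pmf(k[None, :], lam * np.asarray(x_grid)[:, None])
    return lam * pmf @ dF

# Example: exponential data, estimated all the way down to the boundary x = 0.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=200)
xs = np.linspace(0.0, 3.0, 61)
lam_n = len(sample) ** 0.6   # ad hoc rate; cross-validation is the recommended route
f_hat = smooth_density(xs, sample, lam_n)
```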

    Boundary behavior in High Dimension, Low Sample Size asymptotics of PCA

    In High Dimension, Low Sample Size (HDLSS) data situations, where the dimension d is much larger than the sample size n, principal component analysis (PCA) plays an important role in statistical analysis. Under which conditions does the sample PCA reflect the population covariance structure well? We answer this question in a relevant asymptotic context where d grows and n is fixed, under a generalized spiked covariance model. Specifically, we assume the largest population eigenvalues to be of the order d^α, where α ≥ 1. Earlier results give conditions for consistency and strong inconsistency of eigenvectors of the sample covariance matrix. In the boundary case, α = 1, where the sample PC directions are neither consistent nor strongly inconsistent, we show that eigenvalues and eigenvectors do not degenerate but have limiting distributions. The result smoothly bridges the phase transition represented by the other two cases, and thus gives a spectrum of limits for the sample PCA in the HDLSS asymptotics. While the results hold in a general setting, the limiting distributions under the Gaussian assumption are illustrated in greater detail. In addition, the geometric representation of HDLSS data is extended to give three different representations, depending on the magnitude of the variances in the first few principal components.
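
    As a small illustration of the phase transition described above, the sketch below simulates a single-spike covariance model whose leading eigenvalue is d^α and tracks the angle between the leading sample and population eigenvectors as d grows with n fixed. The particular choices of n, d, α and the Gaussian data are assumptions made for illustration, not the paper's general setting.

```python
import numpy as np

def leading_angle(d, n, alpha, rng):
    """Angle (degrees) between the first sample and population PC directions
    under a single-spike model with eigenvalues (d**alpha, 1, ..., 1)."""
    evals = np.ones(d)
    evals[0] = d ** alpha                              # spiked leading eigenvalue
    X = rng.standard_normal((n, d)) * np.sqrt(evals)   # rows ~ N(0, diag(evals))
    # Work with the n x n dual Gram matrix, which is cheap when n << d.
    _, V = np.linalg.eigh(X @ X.T)
    u_hat = X.T @ V[:, -1]                             # leading sample eigenvector
    u_hat /= np.linalg.norm(u_hat)
    cos = min(abs(u_hat[0]), 1.0)                      # population PC1 is the first axis
    return np.degrees(np.arccos(cos))

rng = np.random.default_rng(1)
n = 10
for alpha in (0.8, 1.0, 1.2):   # strong inconsistency / boundary / consistency regimes
    angles = [leading_angle(d, n, alpha, rng) for d in (200, 2000, 20000)]
    print(f"alpha = {alpha}:", [round(a, 1) for a in angles])
```

    As d grows, the angle tends to 0 for α > 1 (consistency) and to 90 degrees for α < 1 (strong inconsistency), while at the boundary α = 1 it has a non-degenerate limiting distribution, matching the three regimes the abstract contrasts.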

    Generalised kernel smoothing for non-negative stationary ergodic processes

    In this paper, we consider a generalised kernel smoothing estimator of the regression function with non-negative support, using gamma probability densities as kernels, which are non-negative and have naturally varying shapes. It is based on a generalisation of Hille's lemma and a perturbation idea that allows us to deal with the problem at the boundary. Its uniform consistency and asymptotic normality are obtained at interior and boundary points, under a stationary ergodic process assumption, without using traditional mixing conditions. The asymptotic mean squared error of the estimator is derived, and the optimal value of the smoothing parameter is also discussed. Graphical illustrations of the proposed estimator are provided for simulated as well as for real data. A simulation study is also carried out to compare our method with the competing local linear method.
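
    To give a concrete feel for gamma densities acting as asymmetric kernels on [0, infinity), here is a minimal Nadaraya-Watson-type sketch with gamma weights. The particular shape/scale parametrisation, the bandwidth, and the simulated non-negative autoregressive covariate are assumptions for illustration, not the exact estimator analysed in the paper.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kernel_regression(x_grid, X, Y, b):
    """Nadaraya-Watson-type regression on [0, inf) with gamma-density weights.

    At an evaluation point x, each observation X_i receives the weight
    gamma(shape = x/b + 1, scale = b).pdf(X_i); the weight function is
    supported on [0, inf), so no kernel mass is wasted below the boundary.
    """
    X, Y, x_grid = map(np.asarray, (X, Y, x_grid))
    shape = x_grid / b + 1.0                               # one shape per grid point
    W = gamma.pdf(X[None, :], a=shape[:, None], scale=b)   # (n_grid, n_obs) weights
    den = W.sum(axis=1)
    return np.where(den > 0, (W @ Y) / np.maximum(den, 1e-300), np.nan)

# Example with a dependent, non-negative covariate process (stationary, ergodic).
rng = np.random.default_rng(2)
n = 500
X = np.empty(n)
X[0] = rng.exponential()
for t in range(1, n):                                  # simple non-negative autoregression
    X[t] = 0.5 * X[t - 1] + rng.exponential(scale=0.5)
Y = np.sqrt(X) + 0.1 * rng.standard_normal(n)          # true regression m(x) = sqrt(x)
xs = np.linspace(0.0, 3.0, 31)
m_hat = gamma_kernel_regression(xs, X, Y, b=0.1)
```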

    A Smooth Estimator of Regression Function for Non-Negative Dependent Random Variables

    Commonly used kernel regression estimators may not provide admissible values of the regression function or its functionals at the boundaries, for regressions with restricted support. Any smoothing method becomes less accurate near the boundary of the observation interval, because fewer observations can be averaged there, and thus variance or bias can be affected. Here, we adapt the density estimation method of Chaubey et al. (2007) for non-negative random variables to define a smooth estimator of the regression function. The estimator is based on a generalization of Hille's lemma and a perturbation idea. Its uniform consistency and asymptotic normality are obtained, for the sake of generality, under a stationary ergodic process assumption for the data. The asymptotic mean squared error is derived, and the optimal value of the smoothing parameter is also discussed. Graphical illustrations of the proposed estimator are provided on simulated as well as real-life data.
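
    The boundary effect mentioned above is easy to reproduce: with covariates supported on [0, infinity), roughly half of a symmetric kernel's mass falls on the empty side at x = 0, so a standard Nadaraya-Watson estimate there is averaged only over observations lying to the right. The following small simulation illustrates the problem itself, not the proposed estimator; the regression function, kernel, and bandwidth are arbitrary choices.

```python
import numpy as np

def nw_gaussian(x_grid, X, Y, h):
    """Standard Nadaraya-Watson estimator with a symmetric Gaussian kernel."""
    W = np.exp(-0.5 * ((x_grid[:, None] - X[None, :]) / h) ** 2)
    return (W @ Y) / W.sum(axis=1)

rng = np.random.default_rng(3)
n = 1000
X = rng.exponential(size=n)                      # non-negative covariates
m = lambda x: np.exp(-x)                         # true regression, steep near zero
Y = m(X) + 0.05 * rng.standard_normal(n)
xs = np.array([0.0, 0.25, 1.0, 2.0])
print("x         :", xs)
print("true m(x) :", np.round(m(xs), 3))
print("NW, h=0.2 :", np.round(nw_gaussian(xs, X, Y, h=0.2), 3))
# At x = 0 the estimate is pulled below m(0) = 1 by the one-sided averaging,
# while the interior points are recovered much more closely.
```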

    The strong law of large numbers for Kaplan-Meier U-statistics

    We introduce a Kaplan-Meier U-statistic of degree two for randomly censored data and prove a strong law of large numbers for it. We use the technique of Stute and Wang, identifying appropriate reverse-time supermartingale processes. This approach avoids the stringent assumptions of Gijbels and Veraverbeke, who consider similar functionals.
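
    The abstract does not reproduce the definition, but a standard way to write a degree-two Kaplan-Meier U-statistic is to weight a symmetric kernel h by products of the jumps of the Kaplan-Meier estimator, as in the sketch below; the kernel h, the censoring mechanism, and the sample size are illustrative assumptions.

```python
import numpy as np

def km_weights(Z, delta):
    """Jumps of the Kaplan-Meier estimator at the ordered observations (no ties assumed).

    Z     : observed times, Z_i = min(T_i, C_i)
    delta : 1 if Z_i is an actual failure time, 0 if censored
    """
    order = np.argsort(Z, kind="stable")
    Z, delta = Z[order], delta[order]
    n = len(Z)
    at_risk = n - np.arange(n)                        # number at risk just before Z_(i)
    factors = 1.0 - delta / at_risk                   # Kaplan-Meier multiplicative factors
    S_prev = np.concatenate(([1.0], np.cumprod(factors)[:-1]))
    return Z, S_prev * delta / at_risk                # jump of 1 - S at each failure time

def km_u_statistic(Z, delta, h):
    """Degree-two Kaplan-Meier U-statistic: sum over i != j of W_i W_j h(Z_i, Z_j)."""
    t, w = km_weights(np.asarray(Z, float), np.asarray(delta, float))
    H = h(t[:, None], t[None, :])                     # kernel evaluated on the observed times
    return w @ H @ w - np.sum(w * w * np.diag(H))     # drop the i = j terms

# Example: estimate E|T - T'| (= 1 for unit exponential lifetimes) under censoring.
rng = np.random.default_rng(4)
T = rng.exponential(size=300)                         # true lifetimes
C = rng.exponential(scale=2.0, size=300)              # independent censoring times
Z, delta = np.minimum(T, C), (T <= C).astype(float)
print(km_u_statistic(Z, delta, lambda s, t: np.abs(s - t)))
```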