1,376 research outputs found

    Functional data analytic approach of modeling ECG T-wave shape to measure cardiovascular behavior

    The T-wave of an electrocardiogram (ECG) represents the ventricular repolarization that is critical in restoring the heart muscle to a pre-contractile state before the next beat. Alterations in the T-wave reflect various cardiac conditions, and links between abnormal (prolonged) ventricular repolarization and malignant arrhythmias have been documented. Cardiac safety testing prior to approval of any new drug currently relies on two points of the ECG waveform: the onset of the Q-wave and the termination of the T-wave; and only a few beats are measured. Using functional data analysis, a statistical approach extracts a common shape for each subject (a reference curve) from a sequence of beats, and then models the deviation of each curve in the sequence from that reference curve as a four-dimensional vector. The representation can be used to distinguish differences between beats or to model shape changes in a subject's T-wave over time. This model provides physically interpretable parameters characterizing T-wave shape and is robust to the determination of the endpoint of the T-wave. Thus, this dimension-reduction methodology offers strong potential for defining more robust and more informative biomarkers of cardiac abnormalities than the QT (or corrected QT) interval in current use. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/09-AOAS273.
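
    A minimal sketch of the reference-curve idea described above, in Python. The beat matrix, the common sampling grid, and the use of the leading four principal component scores as the four-dimensional shape-change vector are illustrative assumptions, not the paper's exact parametrization.

```python
# Hypothetical sketch only: summarizing each beat's deviation from a
# subject-specific reference curve by a low-dimensional vector.
import numpy as np

def twave_shape_features(beats):
    """beats: (n_beats, n_samples) array of T-waves resampled to a common grid."""
    reference = beats.mean(axis=0)             # subject-specific reference curve
    deviations = beats - reference             # deviation of each beat from the reference
    # Summarize each deviation curve by a small vector; here the leading four
    # principal component scores stand in for the four physically
    # interpretable parameters discussed in the abstract.
    _, _, vt = np.linalg.svd(deviations, full_matrices=False)
    scores = deviations @ vt[:4].T             # (n_beats, 4) shape-change vectors
    return reference, scores
```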

    Data compression and regression based on local principal curves.

    Frequently the predictor space of a multivariate regression problem of the type y = m(x_1, …, x_p) + ε is intrinsically one-dimensional, or at least of far lower dimension than p. Usual modeling attempts such as the additive model y = m_1(x_1) + … + m_p(x_p) + ε, which try to reduce the complexity of the regression problem by making additional structural assumptions, are then inefficient: they ignore the inherent structure of the predictor space and involve complicated model and variable selection stages. In a fundamentally different approach, one may first approximate the predictor space by a (usually nonlinear) curve passing through it, and then regress the response only against the one-dimensional projections onto this curve. This reduces the p-dimensional regression problem to a one-dimensional one. As a tool for the compression of the predictor space we apply local principal curves. Building on the results presented in Einbeck et al. (Classification – The Ubiquitous Challenge. Springer, Heidelberg, 2005, pp. 256–263), we show how local principal curves can be parametrized and how the projections are obtained. The regression step can then be carried out using any nonparametric smoother. We illustrate the technique using data from the physical sciences.
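
    A minimal sketch of the compress-then-regress idea, assuming a straight first-principal-component line as a crude stand-in for a local principal curve and a Nadaraya-Watson kernel smoother for the one-dimensional regression step; the function names and the bandwidth are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_pc_line(X):
    """Fit a straight line (mean + first principal direction) through the predictor cloud."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[0]

def project_to_line(X, mean, direction):
    """One-dimensional coordinates of the rows of X along the fitted line."""
    return (X - mean) @ direction

def kernel_regress(t_train, y_train, t_new, bandwidth=0.5):
    """Nadaraya-Watson estimate of E[y | t] at the points t_new."""
    t_new = np.atleast_1d(t_new)
    w = np.exp(-0.5 * ((t_new[:, None] - t_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)
```

    The fitted (mean, direction) pair plays the role of the parametrized curve: training and new predictors are projected with project_to_line, and the response is then smoothed against the resulting one-dimensional coordinate.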

    Two curve Chebyshev approximation and its application to signal clustering

    In this paper we extend a number of important results of classical Chebyshev approximation theory to the case of simultaneous approximation of two or more functions. The need for this extension is application-driven, since such problems appear in the area of curve (signal) clustering. We propose a new efficient algorithm for signal clustering and develop a procedure that allows one to reuse the results obtained at the previous iteration without recomputing the cluster centres from scratch. This approach is based on an extension of the classical de la Vallée-Poussin procedure, originally developed for polynomial approximation. We also develop necessary and sufficient optimality conditions for two-curve Chebyshev approximation, which is our core tool for curve clustering. These results are based on the application of nonsmooth convex analysis.
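
    A hedged sketch of the simultaneous-approximation idea: a single polynomial "centre" is chosen to minimise the worst absolute deviation over both signals, written here as a linear program solved with scipy.optimize.linprog. The polynomial basis, the degree, and the LP formulation are illustrative; the paper's own algorithm is built on a de la Vallée-Poussin-style exchange procedure rather than a generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def two_curve_chebyshev_centre(x, y1, y2, degree=3):
    """Polynomial coefficients minimising max over both curves of |p(x_i) - y_k(x_i)|."""
    V = np.vander(x, degree + 1)                       # polynomial design matrix
    A = np.vstack([V, V])                              # same candidate p(x) for both curves
    y = np.concatenate([y1, y2])
    n, d = A.shape
    # Variables: [coefficients (d of them), t]; minimise t subject to |A c - y| <= t.
    A_ub = np.block([[ A, -np.ones((n, 1))],
                     [-A, -np.ones((n, 1))]])
    b_ub = np.concatenate([y, -y])
    cost = np.zeros(d + 1)
    cost[-1] = 1.0
    bounds = [(None, None)] * d + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d], res.x[-1]                        # coefficients, minimax deviation
```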

    A binary level set method based on k-Means for contour tracking on skin cancer images

    Segmentation of skin cancer images has recently been highlighted as a major challenge in research and development. This paper presents a novel algorithm to improve the segmentation results of the level set algorithm on skin cancer images. The major contribution of the presented algorithm is to simplify skin cancer images for computer-aided object analysis without loss of significant information, and to decrease the required computational cost. The presented algorithm uses the k-means clustering technique to obtain a primitive segmentation that serves as the initial label estimate for the level set algorithm. The proposed segmentation method provides better segmentation results than the standard level set segmentation technique and a modified fuzzy c-means clustering technique.
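
    An illustrative sketch of the initialisation step only, assuming a 2-D grayscale image array and scikit-learn's KMeans; the level set evolution that follows is not reproduced, and treating the darker cluster as the lesion is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_initial_phi(gray_image):
    """Return a +1/-1 binary initial level set from 2-means clustering of pixel intensities."""
    pixels = gray_image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    labels = labels.reshape(gray_image.shape)
    # Assumed convention: the darker cluster is the lesion, the rest is background.
    lesion_label = np.argmin([gray_image[labels == k].mean() for k in (0, 1)])
    return np.where(labels == lesion_label, 1.0, -1.0)   # binary initial label estimate
```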

    A new family of high-resolution multivariate spectral estimators

    In this paper, we extend the Beta divergence family to multivariate power spectral densities. As in the scalar case, we show that it smoothly connects the multivariate Kullback-Leibler divergence with the multivariate Itakura-Saito distance. We then study a spectrum approximation problem, based on the Beta divergence family, which is related to a multivariate extension of the THREE spectral estimation technique. It is then possible to characterize a family of solutions to the problem. An upper bound on the complexity of these solutions is also provided. Simulations suggest that the most suitable solution in this family depends on the specific features required of the estimation problem.
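
    For background only, a scalar sketch of the Beta divergence family between two sampled power spectral densities, showing the Kullback-Leibler (beta → 1) and Itakura-Saito (beta → 0) limits mentioned above; the paper's actual object is the multivariate (matrix-valued) extension, which this snippet does not attempt.

```python
import numpy as np

def beta_divergence(p, q, beta):
    """Beta divergence between positive sampled spectra p and q, summed over frequency."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(beta, 1.0):                  # Kullback-Leibler limit
        return np.sum(p * np.log(p / q) - p + q)
    if np.isclose(beta, 0.0):                  # Itakura-Saito limit
        return np.sum(p / q - np.log(p / q) - 1.0)
    return np.sum((p**beta + (beta - 1) * q**beta - beta * p * q**(beta - 1))
                  / (beta * (beta - 1)))
```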