
    Detecting the number of components in a non-stationary signal using the Rényi entropy of its time-frequency distributions

    A time-frequency distribution provides many advantages in the analysis of multicomponent non-stationary signals. The simultaneous representation of a signal with respect to the time and frequency axes reveals its amplitude, frequency, bandwidth, and the number of components at each time instant. The Rényi entropy, applied to a time-frequency distribution, is shown to be a valuable indicator of signal complexity. The aim of this paper is to determine which of the treated time-frequency distributions (TFDs), namely the Wigner-Ville distribution, the Choi-Williams distribution, and the spectrogram, has the best properties for estimating the number of components when there is no prior knowledge of the signal. The optimal Rényi entropy parameter α is determined for each TFD. Accordingly, the effects of different time durations, bandwidths, and amplitudes of the signal components on the Rényi entropy are analysed. The concept of a class, when the Rényi entropy is applied to TFDs, is also introduced.
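    To make the counting idea concrete, here is a minimal Python sketch applying the Rényi entropy to a spectrogram. It relies on the standard relation that, for well-separated equal-energy components, the entropy of the signal's TFD exceeds that of a single component by roughly log2(N); the test signals, window settings, and the choice α = 3 are illustrative assumptions, not the paper's exact setup.

        import numpy as np
        from scipy.signal import spectrogram

        def renyi_entropy(tfd, alpha=3):
            # Rényi entropy (bits) of a TFD normalized to unit energy.
            p = tfd / tfd.sum()
            return np.log2(np.sum(p ** alpha)) / (1 - alpha)

        fs = 1000
        t = np.arange(0, 1, 1 / fs)
        one_tone = np.sin(2 * np.pi * 100 * t)              # single component
        two_tones = one_tone + np.sin(2 * np.pi * 300 * t)  # two components

        _, _, S_ref = spectrogram(one_tone, fs)
        _, _, S_sig = spectrogram(two_tones, fs)

        # H(signal) ~ H(one component) + log2(N) for well-separated
        # components, so N ~ 2 ** (H_sig - H_ref).
        n_hat = 2 ** (renyi_entropy(S_sig) - renyi_entropy(S_ref))
        print(f"estimated number of components: {n_hat:.2f}")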

    Sparse PCA: Optimal rates and adaptive estimation

    Principal component analysis (PCA) is one of the most commonly used statistical procedures, with a wide range of applications. This paper considers both minimax and adaptive estimation of the principal subspace in the high-dimensional setting. Under mild technical conditions, we first establish the optimal rates of convergence for estimating the principal subspace, which are sharp with respect to all the parameters, thus providing a complete characterization of the difficulty of the estimation problem in terms of the convergence rate. The lower bound is obtained by calculating the local metric entropy and applying Fano's lemma. The rate-optimal estimator is constructed using aggregation, which, however, might not be computationally feasible. We then introduce an adaptive procedure for estimating the principal subspace which is fully data driven and can be computed efficiently. It is shown that the estimator attains the optimal rates of convergence simultaneously over a large collection of parameter spaces. A key idea in our construction is a reduction scheme which reduces the sparse PCA problem to a high-dimensional multivariate regression problem. This method is potentially also useful for other related problems. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/13-AOS1178
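    For intuition about what sparse principal subspace estimation looks like in practice, here is a minimal Python sketch using diagonal thresholding (select the highest-variance coordinates, then eigendecompose the reduced sample covariance). This is a simple classical baseline, not the paper's aggregation estimator or its regression-based adaptive procedure; the spiked-model data and the function name are illustrative.

        import numpy as np

        def sparse_pca_diag_threshold(X, rank=1, n_keep=10):
            # X is n x p; keep the n_keep highest-variance coordinates,
            # then eigendecompose the reduced sample covariance.
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            support = np.argsort((Xc ** 2).mean(axis=0))[-n_keep:]
            cov = Xc[:, support].T @ Xc[:, support] / n
            _, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
            U = np.zeros((p, rank))
            U[support, :] = eigvecs[:, -rank:]        # embed back into R^p
            return U                                  # orthonormal columns

        # Usage: a rank-one spiked covariance model with a 10-sparse
        # leading eigenvector.
        rng = np.random.default_rng(0)
        p, n, s = 200, 500, 10
        v = np.zeros(p)
        v[:s] = 1 / np.sqrt(s)
        X = (3 * rng.standard_normal((n, 1))) @ v[None, :] + rng.standard_normal((n, p))
        U_hat = sparse_pca_diag_threshold(X, rank=1, n_keep=s)
        print("overlap with the true eigenvector:", abs(v @ U_hat[:, 0]))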

    Analysis of subtle changes in biomedical signals based on entropy phase portrait

    A new method is proposed for evaluating subtle changes in biomedical signals caused by external influences on the human organism. The method is based on analysing the chaoticity of the studied parameter, computed in a sliding window along an array of observed values using different entropy estimations. A distinctive feature of the method is the transition from the calculated entropies to their mapping on the phase plane and the estimation of integral parameters of the resulting graphic image (the entropy phase portrait), in particular the area of its convex hull. The diagnostic value of the proposed approach was demonstrated in the processing of real clinical data obtained under conditions of increasing physical activity, coronary artery bypass surgery, and intravenous drip infusion.
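    A minimal sketch of the phase-portrait construction, assuming one plausible reading of the abstract: two entropy estimates are computed per sliding window, each window becomes a point on the phase plane, and the convex hull area of the point cloud is the integral parameter. The particular pair of estimators (histogram Shannon entropy and permutation entropy), the window sizes, and the test signal are illustrative assumptions, not necessarily the authors' choices.

        import numpy as np
        from scipy.spatial import ConvexHull

        def shannon_entropy(x, bins=16):
            # Shannon entropy (bits) of the amplitude histogram.
            counts, _ = np.histogram(x, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))

        def permutation_entropy(x, order=3):
            # Entropy (bits) of ordinal patterns of length `order`.
            patterns = np.array([np.argsort(x[i:i + order])
                                 for i in range(len(x) - order)])
            _, counts = np.unique(patterns, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def entropy_phase_portrait_area(signal, window=256, step=64):
            # One (H_shannon, H_perm) point per sliding window; the
            # convex hull area of the cloud is the integral parameter.
            points = [(shannon_entropy(signal[i:i + window]),
                       permutation_entropy(signal[i:i + window]))
                      for i in range(0, len(signal) - window, step)]
            return ConvexHull(np.array(points)).volume  # .volume = area in 2D

        rng = np.random.default_rng(1)
        sig = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.3 * rng.standard_normal(4096)
        print("entropy phase portrait area:", entropy_phase_portrait_area(sig))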

    An information theoretic learning framework based on Renyi’s α entropy for brain effective connectivity estimation

    The interactions among neural populations distributed across different brain regions are at the core of cognitive and perceptual processing. Therefore, the ability to study the flow of information within networks of connected neural assemblies is of fundamental importance for understanding such processes. In that regard, brain connectivity measures constitute a valuable tool in neuroscience. They allow assessing functional interactions among brain regions through directed or non-directed statistical dependencies estimated from neural time series. Transfer entropy (TE) is one such measure. It is an effective connectivity estimation approach based on information theory concepts and statistical causality premises. It has gained increasing attention in the literature because it can capture purely nonlinear directed interactions and is model free; that is to say, it does not require an initial hypothesis about the interactions present in the data. These properties make it an especially convenient tool for exploratory analyses. However, like any information-theoretic quantity, TE is defined in terms of probability distributions that in practice must be estimated from data, a challenging task whose outcome can significantly affect the resulting TE values. Also, TE lacks a standard spectral representation, so it cannot reveal the local frequency band characteristics of the interactions it detects.
    Contents: 1 Preliminaries (motivation; problem statement; theoretical background on transfer entropy, Granger causality, and information theoretic learning from kernel matrices; literature review on transfer entropy estimation, including TE in the frequency domain; aims; outline and contributions; EEG databases for motor imagery and working memory; thesis structure); 2 Kernel-based Renyi's transfer entropy (experiments with a VAR model, a modified linear Kus model, and EEG data); 3 Kernel-based Renyi's phase transfer entropy (experiments with neural mass models and EEG data); 4 Kernel-based Renyi's phase transfer entropy for the estimation of directed phase-amplitude interactions (experiments with simulated phase-amplitude interactions and EEG data); 5 Final remarks (conclusions, future work, academic products); Appendices A-F (kernel methods and Renyi's entropy estimation, surface Laplacian, permutation testing, kernel-based relevance analysis, Cao's criterion, neural mass model equations); References.
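    Since the thesis centres on estimating TE, a minimal sketch may help make the quantity concrete. The version below is a plain histogram-based Shannon TE estimator with unit lags; the thesis itself develops kernel-based Rényi estimators precisely to avoid this explicit density estimation step, so this is a baseline illustration, not the proposed method.

        import numpy as np

        def transfer_entropy(x, y, bins=8):
            # Binned estimate of TE(X -> Y) in bits, with unit lags.
            xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
            yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
            y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

            def H(*zs):
                # Joint Shannon entropy of discrete variables.
                _, counts = np.unique(np.stack(zs, axis=1), axis=0,
                                      return_counts=True)
                p = counts / counts.sum()
                return -np.sum(p * np.log2(p))

            # TE(X->Y) = I(y_next ; x_now | y_now), expanded into joint entropies.
            return (H(y_next, y_now) - H(y_now)
                    - H(y_next, y_now, x_now) + H(y_now, x_now))

        # Usage: y is a noisy lagged copy of x, so TE(x->y) should
        # clearly exceed TE(y->x).
        rng = np.random.default_rng(2)
        x = rng.standard_normal(5000)
        y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
        print("TE(x->y):", transfer_entropy(x, y))
        print("TE(y->x):", transfer_entropy(y, x))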