85,339 research outputs found

    q-Gaussian based Smoothed Functional Algorithm for Stochastic Optimization

    Full text link
    The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using the q-Gaussian distribution, and an optimization algorithm based on this scheme. Convergence results for the algorithm are presented, and its performance is demonstrated through simulation results on a queuing model. (Comment: 5 pages, 1 figure)
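    To make the idea concrete, below is a minimal sketch of a generic one-sided smoothed-functional gradient estimate with a q-Gaussian smoothing kernel. It is not the paper's exact algorithm (the paper's scaling terms and update rules may differ); the sampler relies on the standard fact that a q-Gaussian with 1 < q < 3 is a scaled Student-t with nu = (3 - q)/(q - 1) degrees of freedom, and all function and parameter names here are illustrative.

```python
import numpy as np

def q_gaussian_sample(q, shape, rng):
    """Sample a standard q-Gaussian (1 < q < 3) via its Student-t
    representation with nu = (3 - q) / (q - 1) degrees of freedom."""
    nu = (3.0 - q) / (q - 1.0)
    return rng.standard_t(nu, size=shape)

def sf_gradient(f, x, q=1.5, beta=0.1, n_samples=100, rng=None):
    """One-sided smoothed-functional gradient estimate:
    grad f(x) ~ E[ Z * (f(x + beta*Z) - f(x)) ] / beta,
    with the perturbation Z drawn from a q-Gaussian kernel."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_samples):
        z = q_gaussian_sample(q, x.shape, rng)
        g += z * (f(x + beta * z) - fx)
    return g / (beta * n_samples)

# Toy usage: minimize a noisy quadratic by SF gradient descent.
rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 2.0) ** 2) + 0.01 * rng.standard_normal()
x = np.zeros(3)
for _ in range(500):
    x -= 0.05 * sf_gradient(f, x, q=1.5, beta=0.1, n_samples=20, rng=rng)
```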

    Entropy-difference based stereo error detection

    Full text link
    Stereo depth estimation is error-prone; hence, effective error detection methods are desirable. Most existing methods depend on characteristics of the stereo matching cost curve, making them unduly dependent on the functional details of the matching algorithm. As a remedy, we propose a novel error detection approach based solely on the input image and its depth map. Our assumption is that the entropy of any point in an image will be significantly higher than the entropy of its corresponding point in the image's depth map. In this paper, we propose a confidence measure, Entropy-Difference (ED), for stereo depth estimates and a binary classification method to identify incorrect depths. Experiments on the Middlebury dataset show the effectiveness of our method. Our proposed stereo confidence measure outperforms 17 existing measures in all aspects except occlusion detection. Established metrics such as precision, accuracy, recall, and area under the curve are used to demonstrate the effectiveness of our method.
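    A plausible reading of the measure is a local-window entropy difference between the image and its depth map; the sketch below implements that interpretation. The window size, bin count, and threshold are assumptions for illustration, not the paper's values, and the per-point entropy is approximated here by a patch histogram.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of an intensity patch via a histogram
    plug-in estimate; values are assumed normalized to [0, 1]."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    total = hist.sum()
    if total == 0:
        return 0.0
    p = hist / total
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_difference_map(image, depth, win=7, bins=32):
    """Per-pixel Entropy-Difference confidence: local image entropy
    minus local depth-map entropy over a win x win neighborhood."""
    h, w = image.shape
    r = win // 2
    ed = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            img_patch = image[i - r:i + r + 1, j - r:j + r + 1]
            dep_patch = depth[i - r:i + r + 1, j - r:j + r + 1]
            ed[i, j] = patch_entropy(img_patch, bins) - patch_entropy(dep_patch, bins)
    return ed

# Usage sketch: flag likely-bad depths where confidence is low.
# image, depth = ...  (H x W arrays in [0, 1])
# bad = entropy_difference_map(image, depth) < 0.5  # threshold is hypothetical
```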

    TRENTOOL : an open source toolbox to estimate neural directed interactions with transfer entropy

    Get PDF
    To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that if the prediction of the future of a time series X from its own past is improved by incorporating information from the past of a second time series Y, this is taken as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle, such as Granger causality, modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems such as the brain, nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are frequently reported. To cover all types of nonlinear interactions in the brain, and thereby fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle in terms of information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique. TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format. …
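    For readers unfamiliar with the quantity TRENTOOL estimates, here is a minimal sketch of Schreiber's transfer entropy for history length 1, using a naive binned plug-in estimator. TRENTOOL itself uses far more data-efficient nearest-neighbour estimators; this sketch only illustrates the definition, and the function name and bin count are illustrative.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE(Y -> X) in bits for history length 1:
    TE = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ],
    with both series discretized into equal-width bins."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = list(zip(xd[1:], xd[:-1], yd[:-1]))
    n = len(triples)
    p_xxy = Counter(triples)                              # (x1, x0, y0)
    p_xy = Counter((x0, y0) for _, x0, y0 in triples)     # (x0, y0)
    p_xx = Counter((x1, x0) for x1, x0, _ in triples)     # (x1, x0)
    p_x = Counter(x0 for _, x0, _ in triples)             # (x0,)
    te = 0.0
    for (x1, x0, y0), c in p_xxy.items():
        num = c / p_xy[(x0, y0)]          # p(x1 | x0, y0)
        den = p_xx[(x1, x0)] / p_x[x0]    # p(x1 | x0)
        te += (c / n) * np.log2(num / den)
    return te

# Toy check: y drives x at lag 1, so TE(y -> x) should exceed TE(x -> y).
rng = np.random.default_rng(1)
y = rng.standard_normal(5000)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```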

    An information theoretic learning framework based on Renyi’s α entropy for brain effective connectivity estimation

    Get PDF
    The interactions among neural populations distributed across different brain regions are at the core of cognitive and perceptual processing. Therefore, the ability to study the flow of information within networks of connected neural assemblies is of fundamental importance for understanding such processes. In that regard, brain connectivity measures constitute a valuable tool in neuroscience. They allow assessing functional interactions among brain regions through directed or non-directed statistical dependencies estimated from neural time series. Transfer entropy (TE) is one such measure. It is an effective connectivity estimation approach based on information theory concepts and statistical causality premises. It has gained increasing attention in the literature because it can capture purely nonlinear directed interactions and is model-free; that is, it does not require an initial hypothesis about the interactions present in the data. These properties make it an especially convenient tool for exploratory analyses. However, like any information-theoretic quantity, TE is defined in terms of probability distributions that in practice must be estimated from data, a challenging task whose outcome can significantly affect TE results. TE also lacks a standard spectral representation, so it cannot reveal the local frequency-band characteristics of the interactions it detects.

    Contents: 1 Preliminaries (motivation; problem statement; theoretical background: transfer entropy, Granger causality, information theoretic learning from kernel matrices; literature review on transfer entropy estimation; aims; outline and contributions; EEG databases: motor imagery, working memory; thesis structure) · 2 Kernel-based Renyi's transfer entropy (experiments: VAR model, modified linear Kus model, EEG data; parameter selection; results and discussion) · 3 Kernel-based Renyi's phase transfer entropy (experiments: neural mass models, EEG data; parameter selection; results and discussion) · 4 Kernel-based Renyi's phase transfer entropy for the estimation of directed phase-amplitude interactions (cross-frequency directionality; experiments: simulated phase-amplitude interactions, EEG data; results and discussion) · 5 Final Remarks (conclusions; future work; academic products) · Appendices: A Kernel methods and Renyi's entropy estimation; B Surface Laplacian; C Permutation testing; D Kernel-based relevance analysis; E Cao's criterion; F Neural mass model equations · References
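    The "information theoretic learning from kernel matrices" this thesis builds on replaces explicit density estimation with the eigenvalue spectrum of a normalized Gram matrix. Below is a minimal sketch of that matrix-based Renyi α-entropy estimator, under the assumptions of a Gaussian kernel with a hand-picked bandwidth; the function name and parameters are illustrative, not the thesis's implementation.

```python
import numpy as np

def renyi_entropy_from_kernel(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy (bits) from a Gaussian Gram
    matrix: normalize the kernel matrix so its eigenvalues sum to one
    and use them like probabilities,
    S_alpha = 1/(1 - alpha) * log2( sum_i lambda_i ** alpha )."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    A = K / np.trace(K)                 # eigenvalues now sum to 1
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]              # drop numerical noise
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

# Toy usage: a tight cluster should score lower entropy than a
# dispersed sample of the same size.
rng = np.random.default_rng(2)
print(renyi_entropy_from_kernel(0.1 * rng.standard_normal((200, 2))))
print(renyi_entropy_from_kernel(3.0 * rng.standard_normal((200, 2))))
```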
