Amplitude- and Fluctuation-based Dispersion Entropy
Dispersion entropy (DispEn) is a recently introduced entropy metric to quantify the uncertainty of time series. It is fast and, so far, it has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of different mappings has not been studied yet. Here, we investigate the effect of linear and nonlinear mapping approaches in DispEn. We also inspect the sensitivity of different parameters of DispEn to noise. Moreover, we develop fluctuation-based DispEn (FDispEn) as a measure to deal with only the fluctuations of time series. Furthermore, the original and fluctuation-based forbidden dispersion patterns are introduced to discriminate deterministic from stochastic time series. Finally, we compare the performance of DispEn, FDispEn, permutation entropy, sample entropy, and Lempel–Ziv complexity on two physiological datasets. The results show that DispEn is the most consistent technique to distinguish various dynamics of the biomedical signals. Due to their advantages over existing entropy methods, DispEn and FDispEn are expected to be broadly used for the characterisation of a wide variety of real-world time series. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/2326
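For readers unfamiliar with the algorithm, the steps the abstract refers to (a mapping step based on the normal CDF, formation of dispersion patterns, and Shannon entropy of the pattern distribution) can be sketched in Python. This is a minimal illustration under common default parameters (embedding dimension m = 2, c = 3 classes), not the authors' released MATLAB implementation:

```python
import math
from collections import Counter

def dispersion_entropy(x, m=2, c=3, delay=1):
    """Minimal dispersion entropy (DispEn) sketch.

    Steps: (1) map samples to c classes via the normal CDF,
    (2) build dispersion patterns of length m with the given delay,
    (3) take the Shannon entropy of the pattern distribution,
    normalised by ln(c**m) so the result lies in [0, 1].
    """
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    # Step 1: normal-CDF mapping, then round to integer classes 1..c
    y = [0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2)))) for v in x]
    z = [min(c, max(1, round(c * v + 0.5))) for v in y]
    # Step 2: dispersion patterns (embedding vectors of class labels)
    patterns = [tuple(z[i + j * delay] for j in range(m))
                for i in range(n - (m - 1) * delay)]
    # Step 3: Shannon entropy of the relative pattern frequencies
    counts = Counter(patterns)
    total = len(patterns)
    h = -sum((k / total) * math.log(k / total) for k in counts.values())
    return h / math.log(c ** m)
```

A periodic signal visits only a few dispersion patterns and therefore scores lower than white noise, which spreads its mass over nearly all c**m patterns.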
Dispersion entropy: A Measure of Irregularity for Graph Signals
We introduce a novel method, called Dispersion Entropy for Graph Signals, as a powerful tool for analysing the irregularity of signals defined on graphs. We demonstrate its effectiveness in detecting changes in the dynamics of signals defined on synthetic and real-world graphs, by defining mixed processes on random geometric graphs or on graphs exhibiting small-world properties. Remarkably, the method generalises the classical dispersion entropy for univariate time series, enabling its application in diverse domains such as image processing, time series analysis, and network analysis, as well as in establishing theoretical relationships (e.g., with graph centrality measures and the graph spectrum). Our results indicate that it effectively captures the irregularity of graph signals across various network configurations, successfully differentiating between distinct levels of randomness and connectivity. Consequently, it provides a comprehensive framework for entropy analysis of various data types, enabling new applications of dispersion entropy not previously feasible, and revealing relationships between graph signals and their graph topology.
Comment: 9 pages, 10 figures, 1 table
Classification of partial discharge EMI conditions using permutation entropy-based features
In this paper we investigate the application of feature extraction and machine learning techniques to fault identification in power systems. Specifically, we implement a novel application of permutation entropy-based measures, known as Weighted Permutation Entropy and Dispersion Entropy, to field Electromagnetic Interference (EMI) signals for the classification of discharge sources, also called conditions, such as partial discharge, arcing, and corona, which arise from various assets at different power sites. This work introduces two main contributions: the application of entropy measures in condition monitoring and the classification of real field EMI captured signals. The two simple, low-dimensional features are fed to a multi-class Support Vector Machine for the classification of the different discharge sources contained in the EMI signals. Classification was performed to distinguish between the conditions observed within each site and between all sites. Results demonstrate that the proposed approach separates and identifies the discharge sources successfully.
Entropy-based feature extraction for electromagnetic discharges classification in high-voltage power generation
This work exploits four entropy measures, known as Sample, Permutation, Weighted Permutation, and Dispersion Entropy, to extract relevant information from Electromagnetic Interference (EMI) discharge signals that is useful in fault diagnosis of High-Voltage (HV) equipment. Multi-class classification algorithms are used to distinguish between various discharge sources such as Partial Discharges (PD), Exciter, Arcing, micro Sparking, and Random Noise. The signals were measured and recorded at different sites, followed by an EMI expert's data analysis to identify and label the discharge source type contained within each signal. The classification was performed both within each site and across all sites. The system performs well in both cases, with extremely high classification accuracy within each site. This work demonstrates the ability to extract relevant entropy-based features from EMI discharge sources in time-resolved signals with minimal computation, making the system ideal for potential application to online condition monitoring based on EMI.
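As a rough sketch of one of the four features named above, the (unweighted) permutation entropy ranks each embedding vector by its ordinal pattern and takes the Shannon entropy of the pattern distribution. This is an illustrative implementation, not the paper's code; the weighted variant and the SVM classification stage are omitted:

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, delay=1):
    """Permutation entropy sketch: Shannon entropy of the ordinal
    (rank-order) patterns of embedding vectors, normalised by
    ln(m!) so the result lies in [0, 1]."""
    # Ordinal pattern = argsort of each length-m embedding vector
    patterns = [tuple(sorted(range(m), key=lambda j: x[i + j * delay]))
                for i in range(len(x) - (m - 1) * delay)]
    counts = Counter(patterns)
    total = len(patterns)
    h = -sum((k / total) * math.log(k / total) for k in counts.values())
    return h / math.log(math.factorial(m))
```

A strictly monotonic signal produces a single ordinal pattern and thus zero entropy, while broadband noise approaches the maximum of 1.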
Stratified Multivariate Multiscale Dispersion Entropy for Physiological Signal Analysis
Multivariate entropy quantification algorithms are becoming a prominent tool for the extraction of information from multi-channel physiological time series. However, in the analysis of physiological signals from heterogeneous organ systems, certain channels may overshadow the patterns of others, resulting in information loss. Here, we introduce the framework of Stratified Entropy to prioritize each channel's dynamics based on its allocation to a respective stratum, leading to a richer description of the multi-channel time series. As an implementation of the framework, three algorithmic variations of Stratified Multivariate Multiscale Dispersion Entropy are introduced. These variations and the original algorithm are applied to synthetic time series, waveform physiological time series, and derivative physiological data. Based on the synthetic time-series experiments, the variations successfully prioritize channels following their strata allocation while maintaining the low computation time of the original algorithm. In experiments on waveform physiological time series and derivative physiological data, increased discrimination capacity was noted for multiple strata allocations in the variations when benchmarked against the original algorithm. This suggests improved physiological state monitoring by the variations. Furthermore, our variations can be modified to utilize a priori knowledge for the stratification of channels. Thus, our research provides a novel approach for the extraction of previously inaccessible information from multi-channel time series acquired from heterogeneous systems.
Coarse-graining Approaches in Univariate Multiscale Sample and Dispersion Entropy
The evaluation of complexity in univariate signals has attracted considerable attention in recent years. This is often done using the framework of Multiscale Entropy, which entails two basic steps: coarse-graining to consider multiple temporal scales, and evaluation of irregularity for each of those scales with entropy estimators. Recent developments in the field have proposed modifications to this approach to facilitate the analysis of short time series. However, the role of downsampling in the classical coarse-graining process and its relationship with alternative filtering techniques has not been systematically explored yet. Here, we assess the impact of coarse-graining in multiscale entropy estimations based on both Sample Entropy and Dispersion Entropy. We compare the classical moving average approach with low-pass Butterworth filtering, both with and without downsampling, and with empirical mode decomposition in Intrinsic Multiscale Entropy, on selected synthetic data and two real physiological datasets. The results show that when the sampling frequency is low or high, downsampling respectively decreases or increases the entropy values. Our results suggest that, when dealing with long signals and relatively low levels of noise, the refined composite method makes little difference to the quality of the entropy estimation at the expense of considerable additional computational cost. It is also found that downsampling within the coarse-graining procedure may not be required to quantify the complexity of signals, especially for short ones. Overall, we expect these results to contribute to the ongoing discussion about the development of stable, fast and robust-to-noise multiscale entropy techniques suited for either short or long recordings.
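The classical coarse-graining step contrasted above can be sketched as follows: averaging non-overlapping windows of length `scale` is equivalent to a moving average followed by downsampling by `scale`, while the overlapping moving average applies the same low-pass step without the downsampling. This is an illustrative sketch, not the paper's code:

```python
def coarse_grain(x, scale):
    """Classical coarse-graining: average non-overlapping windows of
    length `scale` (moving average + downsampling by `scale`)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def moving_average(x, scale):
    """The same low-pass step without downsampling: average
    overlapping windows of length `scale`."""
    return [sum(x[i:i + scale]) / scale for i in range(len(x) - scale + 1)]
```

At scale s, `coarse_grain` shortens an N-sample series to N // s points, which is what makes short recordings problematic at large scales; the overlapping version retains N - s + 1 points.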