
    Classification of Biomedical Signals using the Dynamics of the False Nearest Neighbours (DFNN) Algorithm

    Accurate and efficient analysis of biomedical signals can be facilitated by proper identification based on their dominant dynamic characteristics (deterministic, chaotic or random). Specific analysis techniques exist to study the dynamics of each of these three categories of signals. However, comprehensive and yet adequately simple screening tools to appropriately classify an unknown incoming biomedical signal are still lacking. This study presents an efficient and simple method to classify model signals into the three categories of deterministic, random or chaotic using the dynamics of the False Nearest Neighbours (DFNN) algorithm, and then uses the developed classification method to assess how some specific biomedical signals are positioned with respect to these categories. Model deterministic, chaotic and random signals were subjected to state space decomposition, followed by specific wavelet and statistical analyses aimed at deriving a comprehensive plot that represents the three signal categories in clearly defined clusters. Previously recorded electrogastrographic (EGG) signals subjected to controlled, surgically invoked uncoupling were submitted to the proposed algorithm and were classified as chaotic. Although computationally intensive, the developed methodology was found to be extremely useful and convenient to use. (This study was supported in part by the Natural Sciences and Engineering Research Council of Canada, and by the Gastrointestinal Motility Laboratory, University of Alberta Hospitals, Edmonton, Alberta, Canada.)
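
    A minimal sketch of the false nearest neighbours test that the DFNN method builds on: embed the series in m dimensions, find each point's nearest neighbour, and count how many neighbours separate sharply when the embedding dimension is increased to m + 1. The function names, the rtol threshold and the toy signal are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def embed(x, m, tau):
        """Delay-embed a 1-D series x into m dimensions with lag tau."""
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    def fnn_fraction(x, m, tau, rtol=15.0):
        """Fraction of nearest neighbours in dimension m that are 'false',
        i.e. whose distance grows sharply when re-embedded in m + 1."""
        X_m, X_m1 = embed(x, m, tau), embed(x, m + 1, tau)
        n = len(X_m1)                    # points valid in both embeddings
        n_false = 0
        for i in range(n):
            d = np.linalg.norm(X_m[:n] - X_m[i], axis=1)
            d[i] = np.inf                # exclude the point itself
            j = int(np.argmin(d))        # nearest neighbour in dimension m
            n_false += abs(X_m1[i, -1] - X_m1[j, -1]) / d[j] > rtol
        return n_false / n

    # A noisy sine (deterministic) settles to a low FNN fraction at small m.
    t = np.linspace(0, 60, 3000)
    x = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    print([round(fnn_fraction(x, m, tau=10), 3) for m in (1, 2, 3)])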

    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined in order to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems. The learn-from-examples method introduced by Wang and Mendel (W&M) is used for identification. The Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage. The L–M method yields the best compromise between accuracy and interpretability of results among a set of alternatives. Delta test based residual variance estimates are used to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs. On a diverse set of time series prediction benchmarks, the proposed models are compared against least-squares support vector machines (LS-SVM), optimally pruned extreme learning machines (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability and computational cost. Furthermore, the fuzzy models are shown to be consistently more accurate for prediction on time series coming from real-world applications. (Funding: Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626.)
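
    A minimal sketch of the Delta test step described above: estimate the residual noise variance nonparametrically from nearest-neighbour output differences, and use it to rank candidate subsets of autoregressive inputs. The lag-selection wrapper and all names are illustrative assumptions, not the paper's code.

    import numpy as np
    from itertools import combinations

    def delta_test(X, y):
        """Nonparametric residual variance estimate:
        0.5 * E[(y_i - y_NN(i))^2] over nearest neighbours in input space."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        nn = d.argmin(axis=1)            # nearest neighbour of each input point
        return 0.5 * np.mean((y - y[nn]) ** 2)

    def best_lag_subset(series, max_lag=5, subset_size=2):
        """Pick the subset of autoregressive lags minimising the Delta test."""
        y = series[max_lag:]
        lags = {k: series[max_lag - k : len(series) - k] for k in range(1, max_lag + 1)}
        scored = []
        for combo in combinations(range(1, max_lag + 1), subset_size):
            X = np.column_stack([lags[k] for k in combo])
            scored.append((delta_test(X, y), combo))
        return min(scored)               # (estimated noise variance, chosen lags)

    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 40, 800)) + 0.1 * rng.standard_normal(800)
    print(best_lag_subset(x))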

    Applying an Improved MRPS-GMM Method to Detect Temporal Patterns in Dynamic Data System

    This thesis introduces an improved approach to temporal pattern detection, based on the Multivariate Reconstructed Phase Space (MRPS) and the Gaussian Mixture Model (GMM), that overcomes the disadvantage caused by the diversity of shapes among different temporal patterns in multiple nonlinear time series. It also presents an applicable software program developed in MATLAB that lets users apply the approach. A major task in the study of dynamic data systems is to understand the correspondence between events of interest and predictive temporal patterns in the output observations, which can be used to develop a mechanism for predicting the occurrence of events. The approach introduced in this thesis employs the Expectation-Maximization (EM) algorithm to fit a more precise distribution to the data points embedded in the MRPS. Furthermore, it proposes an improved algorithm for the pattern classification process, which reduces the computational complexity. The recently developed software program, MATPAD, is presented as a deliverable application of this approach. The GUI of this program provides specific functionalities so that users can directly carry out the MRPS embedding procedure and fit the data distribution with a GMM. Moreover, it allows users to customize the related parameters for specific problems so that they can test their own data.
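
    A minimal sketch of the core embedding-plus-GMM step: delay-embed the series into a reconstructed phase space, then fit a Gaussian mixture by EM. It uses scikit-learn's GaussianMixture for the EM fit; the embedding parameters and the toy series are assumptions, and this is not the MATPAD program itself.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def delay_embed(x, m=3, tau=2):
        """Reconstructed phase space: rows are (x_t, x_{t-tau}, ..., x_{t-(m-1)tau})."""
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[(m - 1 - i) * tau : (m - 1 - i) * tau + n]
                                for i in range(m)])

    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 50, 2000)) + 0.1 * rng.standard_normal(2000)

    points = delay_embed(x)                             # embed the series
    gmm = GaussianMixture(n_components=4, covariance_type="full",
                          random_state=0).fit(points)   # EM fit
    labels = gmm.predict(points)                        # component id per point
    print(np.bincount(labels))                          # occupancy of each component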

    Training Echo State Networks with Regularization through Dimensionality Reduction

    In this paper we introduce a new framework for training an Echo State Network to predict real-valued time series. The method consists of projecting the output of the network's internal layer onto a space of lower dimensionality before training the output layer to learn the target task. Notably, we enforce a regularization constraint that leads to better generalization capabilities. We evaluate the performance of our approach on several benchmark tests, using different techniques to train the readout of the network, and achieve superior predictive performance with the proposed framework. Finally, we provide insight into the effectiveness of the implemented mechanism through a visualization of the trajectory in phase space, relying on the methodologies of nonlinear time-series analysis. By applying our method to well-known chaotic systems, we provide evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
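
    A minimal sketch of the framework's central idea: run a fixed random reservoir over the input, project the collected states onto a lower-dimensional space, then train the readout there. PCA and ridge regression stand in for the paper's dimensionality-reduction and readout-training choices; the sizes and hyperparameters are illustrative.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Toy target: predict x(t+1) from x(t) for a sine wave.
    x = np.sin(np.linspace(0, 60, 3000))
    u, y = x[:-1], x[1:]

    # Echo state network reservoir (fixed random weights).
    n_res = 300
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1

    states = np.zeros((len(u), n_res))
    s = np.zeros(n_res)
    for t, ut in enumerate(u):
        s = np.tanh(W_in * ut + W @ s)                  # reservoir state update
        states[t] = s

    # Regularization through dimensionality reduction: PCA, then readout.
    z = PCA(n_components=20).fit_transform(states[200:])   # drop washout, reduce
    readout = Ridge(alpha=1e-6).fit(z, y[200:])
    print("train MSE:", np.mean((readout.predict(z) - y[200:]) ** 2))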

    Detecting Changes in Global Dynamics with Principal Curves and Information Theory

    Two approaches to characterizing global dynamics are developed in this dissertation. In particular, the concern is with nonlinear and chaotic time series obtained from physical systems. The objective is to identify the features that adequately characterize a time series and can consequently be used for fault diagnosis and process monitoring, and for improved control. This study has two parts. The first part is concerned with obtaining a skeletal description of the data using cluster-linked principal curves (CLPC). A CLPC is a non-parametric hypercurve that passes through the center of the data cloud and is obtained through the iterative Expectation-Maximization (E-M) principle. The data points are then projected onto the curve to yield a distribution of arc lengths along it. It is argued that if certain conditions are met, the arc length distribution uniquely characterizes the dynamics. This is demonstrated by testing for stationarity and reversibility based on the arc length distributions. The second part explores the use of a mutual information vector to characterize a system. The mutual information vector, formed via symbolization, is reduced in dimensionality and subjected to the K-means clustering algorithm in order to examine stationarity and to compare different processes. The computations required to implement the techniques for online monitoring and fault diagnosis are light enough to be carried out in real time. For illustration purposes, time series measurements from a liquid-filled column with an electrified capillary and from a fluidized bed are employed.
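
    A minimal sketch of the second part's pipeline: symbolize the series, form a mutual information vector over several lags for each window, and cluster the window vectors with K-means to probe stationarity. The symbol count, lags and window sizes are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def symbolize(x, n_symbols=4):
        """Map values to symbols by equiprobable (quantile) partitioning."""
        edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(x, edges)

    def mutual_info(s, lag, n_symbols=4):
        """Mutual information (nats) between the symbol series and its lagged copy."""
        a, b = s[:-lag], s[lag:]
        joint = np.zeros((n_symbols, n_symbols))
        for i, j in zip(a, b):
            joint[i, j] += 1
        joint /= joint.sum()
        pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz]))

    def mi_vector(x, lags=range(1, 6)):
        s = symbolize(x)
        return np.array([mutual_info(s, lag) for lag in lags])

    # MI vectors per window, clustered to flag a regime change (non-stationarity).
    rng = np.random.default_rng(0)
    x = np.concatenate([np.sin(np.linspace(0, 30, 1500)),
                        rng.standard_normal(1500)])
    vecs = np.array([mi_vector(x[i:i + 500]) for i in range(0, 2500, 500)])
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs))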

    Predictive Pattern Discovery in Dynamic Data Systems

    This dissertation presents novel methods for analyzing nonlinear time series in dynamic systems. The purpose of the newly developed methods is to address the event prediction problem through the modeling of predictive patterns. First, a novel categorization mechanism is introduced to characterize the different underlying states of the system, and a new hybrid method is developed that uses both generative and discriminative models to address the event prediction problem through optimization in multivariate systems. Second, in addition to modeling temporal dynamics, a Bayesian approach is employed to model the first-order Markov behavior in the multivariate data sequences. Experimental evaluations demonstrate superior performance over conventional methods, especially when the underlying system is chaotic and has heterogeneous patterns during state transitions. Finally, the concept of an adaptive parametric phase space is introduced, and the equivalence between the time-domain phase space and the associated parametric space is theoretically analyzed.
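
    A minimal sketch of the first-order Markov ingredient mentioned above: estimate a transition matrix over categorized states and score new state sequences under it. The smoothing choice and all names are illustrative, not the dissertation's hybrid model.

    import numpy as np

    def fit_transition_matrix(states, n_states):
        """MLE of first-order Markov transition probabilities with add-one smoothing."""
        counts = np.ones((n_states, n_states))          # Laplace smoothing
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def sequence_loglik(states, P):
        """Log-likelihood of a state sequence under transition matrix P."""
        return sum(np.log(P[a, b]) for a, b in zip(states[:-1], states[1:]))

    rng = np.random.default_rng(0)
    train = rng.choice(3, size=500, p=[0.6, 0.3, 0.1])  # stand-in categorized states
    P = fit_transition_matrix(train, n_states=3)
    print(sequence_loglik(rng.choice(3, size=50), P))   # score a new sequence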

    ATM Cash demand forecasting in an Indian Bank with chaos and deep learning

    This paper proposes to model the chaos in the ATM cash withdrawal time series of a big Indian bank and to forecast the withdrawals using deep learning methods. It also considers the importance of the day of the week and includes it as a dummy exogenous variable. We first modelled the chaos present in the withdrawal time series by reconstructing the state space of each series using the lag and embedding dimension found with the auto-correlation function and Cao's method. This process converts the univariate time series into a multivariate time series. The day of the week is converted into seven features with the help of one-hot encoding, and these seven features are augmented to the multivariate time series. The future cash withdrawals are then forecast with the following algorithms: ARIMA, random forest (RF), support vector regressor (SVR), multi-layer perceptron (MLP), group method of data handling (GMDH), general regression neural network (GRNN), long short-term memory (LSTM) neural network and 1-dimensional convolutional neural network (1D CNN). We considered a daily cash withdrawals data set from an Indian commercial bank. After modelling the chaos and adding the exogenous features to the data set, we observed improvements in the forecasts for all models. Even though the random forest yielded a better Symmetric Mean Absolute Percentage Error (SMAPE) value, the deep learning algorithms, namely LSTM and 1D CNN, showed performance similar to RF based on a t-test. (Comment: 20 pages; 6 figures and 3 tables.)
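
    A minimal sketch of the feature-construction pipeline described above: pick the lag from the auto-correlation function, delay-embed the withdrawals series, and augment each embedded vector with a one-hot day-of-week encoding. The fixed embedding dimension is a stand-in (the paper estimates it with Cao's method), and the gamma-distributed series is synthetic.

    import numpy as np

    def acf_lag(x, threshold=1 / np.e):
        """Smallest lag at which the autocorrelation falls below 1/e."""
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[len(x) - 1:] / np.dot(x, x)
        return int(np.argmax(acf < threshold))

    def delay_embed(x, m, tau):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    rng = np.random.default_rng(0)
    withdrawals = rng.gamma(2.0, 50.0, size=730)        # stand-in daily series

    tau = max(acf_lag(withdrawals), 1)
    m = 4                                               # stand-in for Cao's estimate
    X = delay_embed(withdrawals, m, tau)

    # One-hot day-of-week aligned with the last coordinate of each embedded vector.
    days = (np.arange(len(withdrawals)) % 7)[(m - 1) * tau:]
    one_hot = np.eye(7)[days]
    features = np.hstack([X, one_hot])                  # input to any regressor
    print(features.shape)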

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters covering current theoretical and practical approaches to EMG research.