1,115 research outputs found

    Training Echo State Networks with Regularization through Dimensionality Reduction

    In this paper we introduce a new framework to train an Echo State Network to predict real-valued time series. The method consists of projecting the output of the internal layer of the network onto a lower-dimensional space before training the output layer to learn the target task. Notably, we enforce a regularization constraint that leads to better generalization capabilities. We evaluate the performance of our approach on several benchmark tests, using different techniques to train the readout of the network, and achieve superior predictive performance with the proposed framework. Finally, we provide insight into the effectiveness of the implemented mechanism through a visualization of the trajectory in phase space, relying on the methodologies of nonlinear time-series analysis. By applying our method to well-known chaotic systems, we provide evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
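    The pipeline described above can be sketched in a few lines of numpy: run a fixed random reservoir over the input, project the collected states onto their leading principal components, and fit a linear readout on the reduced states. The reservoir size, spectral radius, number of components, and ridge penalty below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch: ESN readout trained on PCA-reduced reservoir states.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_pca, washout = 100, 10, 50

# Input: a noisy sine; target: the one-step-ahead value.
t = np.arange(500)
u = np.sin(0.1 * t) + 0.05 * rng.standard_normal(500)

# Random, fixed reservoir rescaled to spectral radius < 1 (echo state property).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res)

# Collect the reservoir states driven by the input.
X = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, ui in enumerate(u):
    x = np.tanh(W @ x + w_in * ui)
    X[i] = x

# Project the states onto a lower-dimensional space (PCA) before the readout.
Xc = X[washout:-1] - X[washout:-1].mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:n_pca].T          # reduced reservoir states
y = u[washout + 1:]            # next-step targets

# Ridge-regression readout fitted on the reduced states.
lam = 1e-6
w_out = np.linalg.solve(Z.T @ Z + lam * np.eye(n_pca), Z.T @ y)
pred = Z @ w_out
print(np.sqrt(np.mean((pred - y) ** 2)))   # training RMSE
```

    Because the readout sees only `n_pca` features instead of `n_res`, the ridge system shrinks from 100x100 to 10x10, which is the regularization-through-reduction effect the abstract refers to.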

    Bidirectional deep-readout echo state networks

    We propose a deep architecture for the classification of multivariate time series. By means of a recurrent, untrained reservoir we generate a vectorial representation that embeds temporal relationships in the data. To improve the memorization capability, we implement a bidirectional reservoir, whose last state also captures past dependencies in the input. We apply dimensionality reduction to the final reservoir states to obtain compressed, fixed-size representations of the time series. These are subsequently fed into a deep feedforward network trained to perform the final classification. We test our architecture on benchmark datasets and on a real-world use case of blood sample classification. Results show that our method performs better than a standard echo state network and, at the same time, achieves results comparable to a fully trained recurrent network, but with faster training.
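    A minimal sketch of the bidirectional-reservoir idea: the same fixed reservoir is run over each series and over its time reversal, the two final states are concatenated, and PCA compresses the result to a fixed-size vector (which would then feed the trained classifier). All sizes and the PCA step are assumptions for demonstration.

```python
# Illustrative bidirectional reservoir representation for multivariate series.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 80, 3

# Fixed, untrained reservoir with spectral radius < 1.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_in))

def last_state(series):
    """Run the untrained reservoir over the series and return its final state."""
    x = np.zeros(n_res)
    for u in series:
        x = np.tanh(W @ x + W_in @ u)
    return x

def bidirectional_repr(series):
    """Concatenate the last states of the forward and time-reversed passes."""
    return np.concatenate([last_state(series), last_state(series[::-1])])

# A small set of multivariate time series; variable lengths are fine because
# each series maps to a fixed-size vector.
dataset = [rng.standard_normal((rng.integers(40, 60), n_in)) for _ in range(20)]
R = np.stack([bidirectional_repr(s) for s in dataset])   # shape (20, 160)

# Dimensionality reduction to a compact fixed-size representation.
Rc = R - R.mean(0)
_, _, Vt = np.linalg.svd(Rc, full_matrices=False)
Z = Rc @ Vt[:10].T
print(Z.shape)   # (20, 10): each series is now a 10-dimensional vector
```

    The backward pass gives the representation access to dependencies that the forward-only last state would have forgotten, at the cost of one extra reservoir run per series.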

    A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data

    A great improvement in the insight into brain function that we can obtain from fMRI data can come from effective connectivity analysis, in which the flow of information between even remote brain regions is inferred from the parameters of a predictive dynamical model. As opposed to biologically inspired models, some techniques such as Granger causality (GC) are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach can suffer from two main limitations when applied to BOLD fMRI data: the confounding effect of the hemodynamic response function (HRF) and conditioning on a large number of variables in the presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI, on the other hand, the absence of explicit inputs makes this task more difficult, unless one relies on some specific prior physiological hypothesis. To overcome these issues and allow a more general approach, here we present a simple and novel blind-deconvolution technique for the BOLD-fMRI signal. Regarding the second limitation, fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting; furthermore, conceptual issues arise in the presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Combining these two improvements, we compare the differences between BOLD and deconvolved-BOLD effective networks and draw some conclusions.

    Locally-Stable Macromodels of Integrated Digital Devices for Multimedia Applications

    This paper addresses the development of accurate and efficient behavioral models of digital integrated circuits for the assessment of high-speed systems. Device models are based on suitable parametric expressions estimated from port transient responses and are effective at the system level, where the quality of functional signals and the impact of supply noise need to be simulated. A potential limitation of some state-of-the-art modeling techniques lies in hidden instabilities that manifest themselves when the models are used, without being evident during the model-building phase. This contribution compares three recently proposed model structures and selects the local-linear state-space modeling technique as an optimal candidate for the signal integrity assessment of data links. In fact, this technique combines a simple verification of the local stability of models with a limited model size and easy implementation in commercial simulation tools. An application of the proposed methodology to a real problem involving commercial devices and a data link of a wireless device demonstrates the validity of this approach.

    Reservoir computing approaches for representation and classification of multivariate time series

    Classification of multivariate time series (MTS) has been tackled with a large variety of methodologies and applied to a wide range of scenarios. Reservoir Computing (RC) provides efficient tools to generate a vectorial, fixed-size representation of the MTS that can be further processed by standard classifiers. Despite their unrivaled training speed, MTS classifiers based on a standard RC architecture fail to achieve the same accuracy as fully trainable neural networks. In this paper we introduce the reservoir model space, an unsupervised approach based on RC to learn vectorial representations of MTS. Each MTS is encoded within the parameters of a linear model trained to predict a low-dimensional embedding of the reservoir dynamics. Compared to other RC methods, our model space yields better representations and attains comparable computational performance, thanks to an intermediate dimensionality reduction procedure. As a second contribution, we propose a modular RC framework for MTS classification, with an associated open-source Python library. The framework provides different modules to seamlessly implement advanced RC architectures. The architectures are compared to other MTS classifiers, including deep learning models and time series kernels. Results obtained on benchmark and real-world MTS datasets show that RC classifiers are dramatically faster and, when implemented using our proposed representation, also achieve superior classification accuracy.
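    The reservoir-model-space encoding described above can be sketched as follows: each series is represented not by its reservoir states but by the parameters of a ridge model fitted to predict a low-dimensional (PCA) embedding of the next reservoir state from the current one. Reservoir sizes, the embedding dimension, and the ridge penalty are assumptions for illustration, and this sketch is not the paper's library.

```python
# Hypothetical sketch of a "reservoir model space" encoding for MTS.
import numpy as np

rng = np.random.default_rng(2)
n_res, n_emb, n_in = 60, 5, 2

# Fixed, untrained reservoir with spectral radius < 1.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_in))

def states(series):
    """Reservoir state trajectory for one multivariate series."""
    x = np.zeros(n_res)
    out = []
    for u in series:
        x = np.tanh(W @ x + W_in @ u)
        out.append(x)
    return np.array(out)

# Fit the low-dimensional embedding (PCA) on states pooled over the dataset.
dataset = [rng.standard_normal((50, n_in)) for _ in range(10)]
all_states = np.vstack([states(s) for s in dataset])
mean = all_states.mean(0)
_, _, Vt = np.linalg.svd(all_states - mean, full_matrices=False)
P = Vt[:n_emb].T                              # projection, shape (n_res, n_emb)

def model_space_repr(series, lam=1e-3):
    """Encode a series by the ridge model mapping state t -> embedded state t+1."""
    X = states(series)
    A, B = X[:-1], (X[1:] - mean) @ P         # inputs and embedded targets
    theta = np.linalg.solve(A.T @ A + lam * np.eye(n_res), A.T @ B)
    return theta.ravel()                      # fixed-size representation

R = np.stack([model_space_repr(s) for s in dataset])
print(R.shape)   # (10, 300): each series encoded by its model parameters
```

    Predicting the embedded dynamics rather than the full next state is what keeps the per-series model small (n_res x n_emb parameters instead of n_res x n_res), which is the intermediate dimensionality reduction the abstract credits for the computational savings.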

    Modeling Covariate Effects in Group Independent Component Analysis with Applications to Functional Magnetic Resonance Imaging

    Independent component analysis (ICA) is a powerful computational tool for separating independent source signals from their linear mixtures. ICA has been widely applied in neuroimaging studies to identify and characterize underlying brain functional networks. An important goal in such studies is to assess the effects of subjects' clinical and demographic covariates on the spatial distributions of the functional networks. Currently, covariate effects are not incorporated in existing group ICA decomposition methods. Hence, they can only be evaluated through ad-hoc approaches, which may not be accurate in many cases. In this paper, we propose a hierarchical covariate ICA model that provides a formal statistical framework for estimating and testing covariate effects in ICA decomposition. A maximum likelihood method is proposed for estimating the covariate ICA model. We develop two expectation-maximization (EM) algorithms to obtain maximum likelihood estimates. The first is an exact EM algorithm, which has analytically tractable E-steps and M-steps. Additionally, we propose a subspace-based approximate EM, which can significantly reduce computational time while still retaining high model-fitting accuracy. Furthermore, to test covariate effects on the functional networks, we develop a voxel-wise approximate inference procedure that eliminates the need for computationally expensive covariance estimation. The performance of the proposed methods is evaluated via simulation studies. The application is illustrated through an fMRI study of Zen meditation. Comment: 36 pages, 5 figures.

    Event Detection and Predictive Maintenance using Component Echo State Networks

    With a growing number of sensors collecting information about systems in industry and infrastructure, one wants to extract useful information from these data. The goal of this project is to investigate the applicability of Echo State Network techniques to time-varying classification of multivariate time series from primarily mechanical and electrical systems. Two relevant technical problems are predicting impending failure of systems (predictive maintenance) and classifying a common event related to the system (event detection). In this project, both are formulated as a supervised machine learning problem on a multivariate time series. For this problem, Echo State Networks (ESNs) have proven effective. However, applying these algorithms to new data sets involves a lot of guesswork as to how the algorithm should be configured to model the data effectively. In this work, a modification of the ESN model is presented that helps to remove some of this guesswork. The new algorithm uses specifically structured components in order to facilitate the generation of relevant features by the ESN. The algorithm is tested on two easy event detection data sets and one hard predictive maintenance data set. The results are compared to Support Vector Machine and Multilayer Perceptron classifiers, as well as to a basic ESN, which is also implemented as a reference. The component ESN successfully generates promising features and outperforms the minimum-complexity ESN as well as the standard classifiers.