
    Predicting expected TCP throughput using genetic algorithm

    Predicting the expected throughput of TCP is important for several purposes, such as determining handover criteria for future multihomed mobile nodes or estimating the expected throughput of a given MPTCP subflow for load-balancing reasons. However, this is challenging due to the time-varying behavior of the underlying network characteristics. In this paper, we present a genetic-algorithm-based prediction model for estimating TCP throughput values. Our approach uses a genetic algorithm to find the best matching combination of mathematical functions that approximates a given time series of TCP throughput samples. Based on collected historical data points of measured TCP throughput, our algorithm estimates the expected throughput over time. We evaluate the quality of the prediction using different selection and diversity strategies for creating new chromosomes. We also explore the use of different fitness functions for evaluating the goodness of a chromosome. The goal is to show how different tunings of the genetic algorithm affect the prediction. Using extensive simulations over several TCP throughput traces, we find that the genetic algorithm successfully finds reasonable matching mathematical functions that describe the sampled TCP throughput values with good fidelity. We also explore the effectiveness of predicting time series of throughput samples over a given prediction horizon and estimate the prediction error and confidence.
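
    The abstract does not give the concrete encoding, so the following is only a minimal sketch of the general idea in Python: a chromosome encodes the coefficients of a small, assumed set of basis functions (offset, trend, two sinusoids), fitness is the negative mean squared error against historical throughput samples, and tournament selection with uniform crossover and Gaussian mutation evolves the population before the best chromosome is extrapolated over a prediction horizon. The basis set, function names, and operator settings are illustrative assumptions, not the paper's implementation.

```python
# Sketch only: GA that fits a combination of basis functions to throughput samples.
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):
    # Chromosome: [offset, slope, amp1, freq1, phase1, amp2, freq2, phase2]
    c, m, a1, f1, p1, a2, f2, p2 = params
    return c + m * t + a1 * np.sin(f1 * t + p1) + a2 * np.sin(f2 * t + p2)

def fitness(params, t, y):
    # Lower mean squared error against the historical samples -> fitter chromosome.
    return -np.mean((model(params, t) - y) ** 2)

def evolve(t, y, pop_size=60, n_genes=8, generations=200,
           tournament=3, mutation_rate=0.2, mutation_scale=0.3):
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_genes))
    for _ in range(generations):
        scores = np.array([fitness(ind, t, y) for ind in pop])
        new_pop = [pop[np.argmax(scores)]]          # elitism keeps the best chromosome
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            i = rng.choice(pop_size, tournament); p1 = pop[i[np.argmax(scores[i])]]
            j = rng.choice(pop_size, tournament); p2 = pop[j[np.argmax(scores[j])]]
            # Uniform crossover followed by Gaussian mutation (one diversity strategy).
            mask = rng.random(n_genes) < 0.5
            child = np.where(mask, p1, p2)
            mutate = rng.random(n_genes) < mutation_rate
            child = child + mutate * rng.normal(0.0, mutation_scale, n_genes)
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind, t, y) for ind in pop])
    return pop[np.argmax(scores)]

# Toy usage: fit synthetic historical throughput samples, then predict over a horizon.
t_hist = np.arange(100.0)
y_hist = 10 + 0.02 * t_hist + 2 * np.sin(0.3 * t_hist) + rng.normal(0, 0.3, t_hist.size)
best = evolve(t_hist, y_hist)
t_future = np.arange(100.0, 120.0)                  # prediction horizon
y_pred = model(best, t_future)
print("in-sample RMSE:", np.sqrt(np.mean((model(best, t_hist) - y_hist) ** 2)))
```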

    Data based identification and prediction of nonlinear and complex dynamical systems

    We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities, Beijing Nova Programme.

    Training Echo State Networks with Regularization through Dimensionality Reduction

    In this paper we introduce a new framework to train an Echo State Network to predict real-valued time series. The method consists of projecting the output of the internal layer of the network onto a space of lower dimensionality before training the output layer to learn the target task. Notably, we enforce a regularization constraint that leads to better generalization capabilities. We evaluate the performance of our approach on several benchmark tests, using different techniques to train the readout of the network, and achieve superior predictive performance with the proposed framework. Finally, we provide insight into the effectiveness of the implemented mechanics through a visualization of the trajectory in the phase space, relying on the methodologies of nonlinear time-series analysis. By applying our method to well-known chaotic systems, we provide evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
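
    As a rough illustration of the described pipeline, the sketch below drives a fixed random reservoir with the input series, projects the collected reservoir states onto their top principal components (one possible dimensionality-reduction choice; the paper's exact projection may differ), and trains a ridge-regression readout on the reduced states. Reservoir size, leak rate, and the PCA-plus-ridge combination are assumptions for illustration only.

```python
# Sketch only: ESN with a dimensionality-reduced, regularized readout.
import numpy as np

rng = np.random.default_rng(1)

def esn_states(u, n_reservoir=300, spectral_radius=0.9, input_scale=0.5, leak=1.0):
    # Fixed random input and recurrent weights; only the readout is trained.
    W_in = rng.uniform(-input_scale, input_scale, size=n_reservoir)
    W = rng.normal(0, 1, size=(n_reservoir, n_reservoir))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    x, states = np.zeros(n_reservoir), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def pca_project(X, n_components):
    # Project reservoir states onto their top principal components (regularizing step).
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T, Vt[:n_components], mean

def ridge_fit(Z, y, alpha=1e-4):
    # Closed-form ridge regression readout on the reduced states.
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ y)

# Toy usage on a noisy sine wave: predict u[t+1] from the reduced state at time t.
u = np.sin(0.2 * np.arange(1000)) + 0.05 * rng.normal(size=1000)
X = esn_states(u[:-1])
Z, components, mean = pca_project(X, n_components=30)
w = ridge_fit(Z[100:], u[101:])          # discard a washout of 100 steps
pred = Z[100:] @ w
print("train RMSE:", np.sqrt(np.mean((pred - u[101:]) ** 2)))
```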

    Multivariate dynamic kernels for financial time series forecasting

    The final publication is available at http://link.springer.com/chapter/10.1007/978-3-319-44781-0_40
    We propose a forecasting procedure based on multivariate dynamic kernels, with the capability of integrating information measured at different frequencies and at irregular time intervals in financial markets. A data compression process redefines the original financial time series into temporal data blocks, analyzing the temporal information of multiple time intervals. The analysis is done through multivariate dynamic kernels within support vector regression. We also propose two kernels for financial time series that are computationally efficient without sacrificing accuracy. The efficacy of the methodology is demonstrated by empirical experiments on forecasting the challenging S&P 500 market.
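
    The proposed kernels are not specified in the abstract, so the sketch below only illustrates the overall workflow under assumed details: a multivariate price series is compressed into overlapping temporal data blocks, a simple block-level kernel averages RBF similarities between aligned lags, and the resulting Gram matrix is passed to scikit-learn's SVR with a precomputed kernel. Block length, kernel form, and the SVR settings are illustrative assumptions.

```python
# Sketch only: temporal data blocks + a block-level kernel inside precomputed-kernel SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

def make_blocks(series, block_len=10):
    # Compress a (T, d) multivariate series into overlapping (block_len, d) blocks;
    # the target is the next value of the first variable after each block.
    X, y = [], []
    for t in range(len(series) - block_len):
        X.append(series[t:t + block_len])
        y.append(series[t + block_len, 0])
    return np.array(X), np.array(y)

def dynamic_kernel(A, B, gamma=0.1):
    # Simple "dynamic" kernel: average RBF similarity between aligned lags of two blocks.
    K = np.zeros((len(A), len(B)))
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            K[i, j] = np.mean(np.exp(-gamma * np.sum((a - b) ** 2, axis=1)))
    return K

# Toy usage with two synthetic "market" variables sampled at the same frequency.
prices = np.cumsum(rng.normal(0, 1, size=(400, 2)), axis=0)
X, y = make_blocks(prices)
split = 300
K_train = dynamic_kernel(X[:split], X[:split])
K_test = dynamic_kernel(X[split:], X[:split])
model = SVR(kernel="precomputed", C=10.0, epsilon=0.1).fit(K_train, y[:split])
print("test RMSE:", np.sqrt(np.mean((model.predict(K_test) - y[split:]) ** 2)))
```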

    Efficient Optimization of Echo State Networks for Time Series Datasets

    Echo State Networks (ESNs) are recurrent neural networks that only train their output layer, thereby precluding the need to backpropagate gradients through time, which leads to significant computational gains. Nevertheless, a common issue with ESNs is determining their hyperparameters, which are crucial for instantiating a well-performing reservoir but are often set manually or using heuristics. In this work we optimize the ESN hyperparameters using Bayesian optimization which, given a limited budget of function evaluations, outperforms a grid search strategy. In the context of large volumes of time series data, such as light curves in the field of astronomy, we can further reduce the optimization cost of ESNs. In particular, we wish to avoid tuning hyperparameters per individual time series, as this is costly; instead, we want to find ESNs whose hyperparameters perform well not just on individual time series but on groups of similar time series, without sacrificing predictive performance significantly. This naturally leads to a notion of clusters, where each cluster is represented by an ESN tuned to model a group of time series with similar temporal behavior. We demonstrate this approach both on synthetic datasets and on real-world light curves from the MACHO survey. We show that our approach results in a significant reduction in the number of ESN models required to model a whole dataset, while retaining predictive performance for the series in each cluster.
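
    A minimal sketch of the hyperparameter-search step, assuming scikit-optimize is available: gp_minimize spends a fixed budget of evaluations searching over the spectral radius and leak rate of a small ESN, scoring each candidate by one-step-ahead validation RMSE. In the setting described above, this objective would be evaluated once per cluster of similar time series rather than per individual series; the reservoir construction and search ranges here are illustrative assumptions, not the authors' pipeline.

```python
# Sketch only: Bayesian optimization of two ESN hyperparameters with scikit-optimize.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(3)

def esn_val_error(u, spectral_radius, leak, n_reservoir=200, ridge=1e-4, washout=50):
    local = np.random.default_rng(42)   # fixed reservoir seed keeps the objective deterministic
    W_in = local.uniform(-0.5, 0.5, size=n_reservoir)
    W = local.normal(0, 1, size=(n_reservoir, n_reservoir))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    x, states = np.zeros(n_reservoir), []
    for u_t in u[:-1]:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    # Train a ridge readout on the first half of the states, report RMSE on the second.
    X, y = np.array(states)[washout:], u[washout + 1:]
    split = len(X) // 2
    w = np.linalg.solve(X[:split].T @ X[:split] + ridge * np.eye(n_reservoir),
                        X[:split].T @ y[:split])
    return float(np.sqrt(np.mean((X[split:] @ w - y[split:]) ** 2)))

u = np.sin(0.15 * np.arange(800)) + 0.05 * rng.normal(size=800)   # stand-in for one cluster

def objective(params):
    spectral_radius, leak = params
    return esn_val_error(u, spectral_radius, leak)

result = gp_minimize(objective,
                     [Real(0.1, 1.4, name="spectral_radius"), Real(0.1, 1.0, name="leak")],
                     n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "validation RMSE:", result.fun)
```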