
    Input window size and neural network predictors

    Neural network approaches to time series prediction are briefly discussed, and the need to specify an appropriately sized input window is identified. Relevant theoretical results from dynamical systems theory are briefly introduced, and heuristics for finding the correct embedding dimension, and hence window size, are discussed. The method is applied to two time series, and the generalisation performance of the resulting trained feedforward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.
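
    As a rough illustration of the kind of heuristic this abstract describes, the Python sketch below builds fixed-size input windows for one-step-ahead prediction and estimates an embedding dimension with a simple false-nearest-neighbours test in the spirit of the Kennel criterion; the function names, the tolerance rtol, and the toy series are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def make_windows(series, window):
            """Turn a 1-D series into (inputs, targets) for one-step-ahead prediction."""
            X = np.stack([series[i:i + window] for i in range(len(series) - window)])
            y = series[window:]
            return X, y

        def fnn_fraction(series, dim, rtol=10.0):
            """Fraction of nearest neighbours in dimension `dim` that separate
            sharply when the embedding is extended to dim + 1 (false neighbours)."""
            emb = np.stack([series[i:i + dim] for i in range(len(series) - dim)])
            false = 0
            for i in range(len(emb)):
                dists = np.linalg.norm(emb - emb[i], axis=1)
                dists[i] = np.inf                        # exclude the point itself
                j = int(np.argmin(dists))                # nearest neighbour in dimension dim
                extra = abs(series[i + dim] - series[j + dim])
                if dists[j] > 0 and extra / dists[j] > rtol:
                    false += 1
            return false / len(emb)

        series = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.randn(500)
        for d in range(1, 8):
            print(d, round(fnn_fraction(series, d), 3))  # choose d where this drops near 0
        X, y = make_windows(series, window=4)            # window size = chosen dimension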

    Learning to Embed Words in Context for Syntactic Tasks

    We present models for embedding words in the context of surrounding words. Such models, which we refer to as token embeddings, represent the characteristics of a word that are specific to a given context, such as word sense, syntactic category, and semantic role. We explore simple, efficient token embedding models based on standard neural network architectures. We learn token embeddings on a large amount of unannotated text and evaluate them as features for part-of-speech taggers and dependency parsers trained on much smaller amounts of annotated data. We find that predictors endowed with token embeddings consistently outperform baseline predictors across a range of context window and training set sizes.
    Comment: Accepted by the ACL 2017 Repl4NLP workshop
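
    One plausible instance of the "standard neural network architectures" mentioned above is a bidirectional LSTM whose hidden state at each position serves as that token's context-sensitive embedding. The minimal PyTorch sketch below is an assumption about the setup, not the paper's exact model; the dimensions and vocabulary size are placeholders.

        import torch
        import torch.nn as nn

        class TokenEmbedder(nn.Module):
            def __init__(self, vocab_size, word_dim=100, hidden_dim=128):
                super().__init__()
                self.word_emb = nn.Embedding(vocab_size, word_dim)   # type-level embeddings
                self.encoder = nn.LSTM(word_dim, hidden_dim,
                                       batch_first=True, bidirectional=True)

            def forward(self, token_ids):                # (batch, seq_len) word ids
                h, _ = self.encoder(self.word_emb(token_ids))
                return h                                 # (batch, seq_len, 2 * hidden_dim)

        # Each row of `tokens` is a sentence of word ids; the per-token vectors can be
        # concatenated to a tagger's or parser's existing features.
        tokens = torch.randint(0, 5000, (2, 7))
        token_vecs = TokenEmbedder(vocab_size=5000)(tokens)
        print(token_vecs.shape)                          # torch.Size([2, 7, 256])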

    A Long Short-Term Memory Recurrent Neural Network Framework for Network Traffic Matrix Prediction

    Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from previously observed traffic data. It is widely used in network planning, resource management, and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well suited to learning from experience to classify, process, and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting short- and long-term TMs in large networks. By validating our framework on real-world data from the GEANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small models.
    Comment: Submitted for peer review. arXiv admin note: text overlap with arXiv:1402.1128 by other authors
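
    The abstract implies a standard supervised setup: flatten each traffic matrix to a vector and let a sliding window of past matrices predict the next one. The sketch below shows that data preparation step; the window length and the 23-node matrix size (GEANT's node count) are illustrative assumptions.

        import numpy as np

        def tm_windows(tms, window):
            """tms: (T, n, n) traffic matrices in time order.
            Returns inputs (samples, window, n*n) and next-step targets (samples, n*n)."""
            flat = tms.reshape(len(tms), -1)
            X = np.stack([flat[i:i + window] for i in range(len(flat) - window)])
            y = flat[window:]
            return X, y

        tms = np.abs(np.random.randn(1000, 23, 23))    # stand-in for measured matrices
        X, y = tm_windows(tms, window=10)
        print(X.shape, y.shape)                        # (990, 10, 529) (990, 529)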

    NeuTM: A Neural Network-based Framework for Traffic Matrix Prediction in SDN

    This paper presents NeuTM, a framework for network Traffic Matrix (TM) prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs). TM prediction is defined as the problem of estimating a future network traffic matrix from previously observed traffic data. It is widely used in network planning, resource management, and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well suited to learning from data and classifying or predicting time series with time lags of unknown size. LSTMs have been shown to model long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM RNN-based framework for predicting TMs in large networks. By validating our framework on real-world data from the GEANT network, we show that our model converges quickly and gives state-of-the-art TM prediction performance.
    Comment: Submitted to NOMS18. arXiv admin note: substantial text overlap with arXiv:1705.0569
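
    Below is a minimal sketch of an LSTM predictor of the kind NeuTM describes: it reads a window of flattened traffic matrices and regresses the next one. Layer sizes, the optimiser, and the training step are illustrative assumptions (written in PyTorch), not the paper's configuration; the input windows come from a preparation step like the one sketched above.

        import torch
        import torch.nn as nn

        class TMPredictor(nn.Module):
            def __init__(self, n_flows, hidden_dim=300):
                super().__init__()
                self.lstm = nn.LSTM(n_flows, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, n_flows)

            def forward(self, x):                  # x: (batch, window, n_flows)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])       # regress the next flattened TM

        model = TMPredictor(n_flows=529)           # 23 x 23 matrix, flattened
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        x = torch.randn(32, 10, 529)               # a batch of input windows
        y = torch.randn(32, 529)                   # the matrices that follow them
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()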