European exchange traded funds trading with locally weighted support vector regression
In this paper, two different Locally Weighted Support Vector Regression (wSVR) algorithms are generated and applied to the task of forecasting and trading five European Exchange Traded Funds. The trading application covers the recent European Monetary Union debt crisis. The performance of the proposed models is benchmarked against traditional Support Vector Regression (SVR) models. The Radial Basis Function, the Wavelet and the Mahalanobis kernels are explored and tested as SVR kernels. Finally, a novel statistical SVR input selection procedure is introduced, based on principal component analysis and the Hansen, Lunde, and Nason (2011) model confidence set test. The results demonstrate the superiority of the wSVR models over the traditional SVRs and of the ν-SVR over the ε-SVR algorithms. We note that the performance of all models varies and deteriorates considerably at the peak of the debt crisis. In terms of the kernels, our results do not confirm the belief that the Radial Basis Function is the optimal choice for financial series.
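As a hedged sketch of the two SVR variants the abstract compares, the snippet below fits an ε-SVR and a ν-SVR with a Radial Basis Function kernel using scikit-learn. The synthetic sine data and hyperparameters are illustrative assumptions only, not the paper's ETF series or tuned settings.

```python
# Toy comparison of epsilon-SVR and nu-SVR with an RBF kernel.
# Data and hyperparameters are illustrative, not the paper's setup.
import numpy as np
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)  # noisy target series

# epsilon-SVR: tube width set via epsilon; nu-SVR: via the nu fraction
eps_svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
nu_svr = NuSVR(kernel="rbf", C=10.0, nu=0.5).fit(X, y)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
mae_eps = np.mean(np.abs(eps_svr.predict(X_test) - np.sin(X_test).ravel()))
mae_nu = np.mean(np.abs(nu_svr.predict(X_test) - np.sin(X_test).ravel()))
```

Both formulations solve the same regression problem; they differ only in how the insensitivity tube is parameterised, which is why the abstract can compare them head to head on identical inputs.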
Predicting expected TCP throughput using genetic algorithm
Predicting the expected throughput of TCP is important for several tasks, e.g. determining handover criteria for future multihomed mobile nodes or estimating the expected throughput of a given MPTCP subflow for load balancing. However, this is challenging due to the time-varying behavior of the underlying network characteristics. In this paper, we present a genetic-algorithm-based prediction model for estimating TCP throughput values. Our approach tries to find the best-matching combination of mathematical functions that approximates a given time series of TCP throughput samples. Based on collected historical data points of measured TCP throughput, our algorithm estimates the expected throughput over time. We evaluate the quality of the prediction using different selection and diversity strategies for creating new chromosomes. We also explore different fitness functions for evaluating the goodness of a chromosome. The goal is to show how different tunings of the genetic algorithm affect the prediction. Using extensive simulations over several TCP throughput traces, we find that the genetic algorithm successfully finds mathematical functions that describe the sampled TCP throughput values with good fidelity. We also explore the effectiveness of predicting time series throughput samples for a given prediction horizon and estimate the prediction error and confidence.
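A minimal sketch of the idea, under simplifying assumptions: chromosomes encode the coefficients of a small fixed basis of functions (constant, linear trend, sinusoid), fitness is negative mean squared error against a throughput-like series, and evolution uses truncation selection with Gaussian mutation and elitism. The encoding, basis set, and operators are illustrative choices, not the paper's exact design.

```python
# Toy GA that fits a weighted combination of basis functions to a
# synthetic throughput-like time series. Encoding and operators are
# illustrative assumptions, not the paper's exact design.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 100)
series = 5.0 + 0.3 * t + 2.0 * np.sin(t) + 0.1 * rng.normal(size=t.size)

def model(chrom, t):
    a, b, c = chrom                      # coefficients of the basis functions
    return a + b * t + c * np.sin(t)

def fitness(chrom):
    return -np.mean((model(chrom, t) - series) ** 2)  # negative MSE

pop = rng.normal(0.0, 3.0, size=(50, 3))              # random initial population
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    # mutation-only offspring; parents kept unchanged (elitism)
    children = parents[rng.integers(0, 10, size=40)] + rng.normal(0.0, 0.1, size=(40, 3))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
rmse = float(np.sqrt(-fitness(best)))
```

Changing the selection pressure (size of the parent pool), the mutation scale, or the fitness function here corresponds directly to the tuning knobs whose impact on prediction quality the paper investigates.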
Optimal model-free prediction from multivariate time series
Forecasting a time series from multivariate predictors constitutes a
challenging problem, especially using model-free approaches. Most techniques,
such as nearest-neighbor prediction, quickly suffer from the curse of
dimensionality and overfitting for more than a few predictors which has limited
their application mostly to the univariate case. Therefore, selection
strategies are needed that harness the available information as efficiently as
possible. Since often the right combination of predictors matters, ideally all
subsets of possible predictors should be tested for their predictive power, but
the exponentially growing number of combinations makes such an approach
computationally prohibitive. Here a prediction scheme that overcomes this
strong limitation is introduced utilizing a causal pre-selection step which
drastically reduces the number of possible predictors to the most predictive
set of causal drivers making a globally optimal search scheme tractable. The
information-theoretic optimality is derived and practical selection criteria
are discussed. As demonstrated for multivariate nonlinear stochastic delay
processes, the optimal scheme can even be less computationally expensive than
commonly used sub-optimal schemes like forward selection. The method suggests a
general framework to apply the optimal model-free approach to select variables
and subsequently fit a model to further improve a prediction or learn
statistical dependencies. The performance of this framework is illustrated on a
climatological index of El Niño Southern Oscillation.
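The scheme's key enabler can be sketched as follows, with the causal pre-selection step omitted: once the candidate drivers have been reduced to a handful, every subset can be scored exhaustively by cross-validated nearest-neighbor prediction error. The data and predictor names below are illustrative assumptions.

```python
# Exhaustive subset search over a small, pre-selected set of candidate
# drivers, scored by nearest-neighbor cross-validated R^2. The causal
# pre-selection step that makes this tractable is omitted here.
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 4))                     # 4 pre-selected candidate drivers
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=n)  # only 2 matter

best_subset, best_score = None, -np.inf
for k in range(1, 5):                            # all non-empty subsets
    for subset in combinations(range(4), k):
        knn = KNeighborsRegressor(n_neighbors=5)
        score = cross_val_score(knn, X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_subset, best_score = subset, score
```

With only four pre-selected candidates there are 15 subsets to test; without pre-selection the count grows exponentially in the number of raw predictors, which is exactly the bottleneck the abstract describes.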
Forecasting UK real estate cycle phases with leading indicators: a probit approach
This paper examines the significance of widely used leading indicators of the UK economy for predicting the cyclical pattern of commercial real estate performance. The analysis uses monthly capital value data for UK industrials, offices and retail from the Investment Property Databank (IPD). Prospective economic indicators are drawn from three sources, namely the series used by the US Conference Board to construct their UK leading indicator and the series deployed by two private organisations, Lombard Street Research and NTC Research, to predict UK economic activity. We first identify turning points in the capital value series, adopting techniques employed in the classical business cycle literature. We then estimate probit models using the leading economic indicators as independent variables and forecast the probability of different phases of capital values, that is, periods of declining and rising capital values. The forecast performance of the models is tested and found to be satisfactory. The predictability of lasting directional changes in property performance represents a useful tool for real estate investment decision-making.
Evolutionary Selection of Individual Expectations and Aggregate Outcomes
In recent 'learning to forecast' experiments with human subjects (Hommes et al., 2005), three different patterns in aggregate asset price behavior have been observed: slow monotonic convergence, permanent oscillations and dampened fluctuations. We construct a simple model of individual learning, based on performance-based evolutionary selection or reinforcement learning among heterogeneous expectations rules, that explains these different aggregate outcomes. The out-of-sample predictive power of our switching model is higher than that of the rational or other homogeneous expectations benchmarks. Our results show that heterogeneity in expectations is crucial to describe individual forecasting behavior as well as aggregate price behavior.
Bayesian Recurrent Neural Network Models for Forecasting and Quantifying Uncertainty in Spatial-Temporal Data
Recurrent neural networks (RNNs) are nonlinear dynamical models commonly used
in the machine learning and dynamical systems literature to represent complex
dynamical or sequential relationships between variables. More recently, as deep
learning models have become more common, RNNs have been used to forecast
increasingly complicated systems. Dynamical spatio-temporal processes represent
a class of complex systems that can potentially benefit from these types of
models. Although the RNN literature is expansive and highly developed,
uncertainty quantification is often ignored. Even when considered, the
uncertainty is generally quantified without the use of a rigorous framework,
such as a fully Bayesian setting. Here we attempt to quantify uncertainty in a
more formal framework while maintaining the forecast accuracy that makes these
models appealing, by presenting a Bayesian RNN model for nonlinear
spatio-temporal forecasting. Additionally, we make simple modifications to the
basic RNN to help accommodate the unique nature of nonlinear spatio-temporal
data. The proposed model is applied to a Lorenz simulation and two real-world
nonlinear spatio-temporal forecasting applications.
Elicitability and backtesting: Perspectives for banking regulation
Conditional forecasts of risk measures play an important role in internal
risk management of financial institutions as well as in regulatory capital
calculations. In order to assess forecasting performance of a risk measurement
procedure, risk measure forecasts are compared to the realized financial losses
over a period of time and a statistical test of correctness of the procedure is
conducted. This process is known as backtesting. Such traditional backtests are
concerned with assessing some optimality property of a set of risk measure
estimates. However, they are not suited to compare different risk estimation
procedures. We investigate the proposal of comparative backtests, which are
better suited for method comparisons on the basis of forecasting accuracy, but
necessitate an elicitable risk measure. We argue that supplementing traditional
backtests with comparative backtests will enhance the existing trading book
regulatory framework for banks by providing the correct incentive for accuracy
of risk measure forecasts. In addition, the comparative backtesting framework
could be used by banks internally as well as by researchers to guide selection
of forecasting methods. The discussion focuses on three risk measures,
Value-at-Risk, expected shortfall and expectiles, and is supported by a
simulation study and data analysis.
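A hedged sketch of the two backtesting styles for Value-at-Risk, one of the three risk measures discussed: VaR at level α is elicitable via the pinball (quantile) loss, so a traditional backtest checks the exceedance frequency of one method, while a comparative backtest ranks two methods by their average scores. The Gaussian loss data and the two "methods" below are synthetic stand-ins, not the paper's study design.

```python
# Traditional vs comparative VaR backtest on synthetic losses.
# VaR is elicitable with the pinball loss, enabling method comparison.
import numpy as np
from scipy.stats import norm

def pinball_loss(var_forecast, loss, alpha):
    # strictly consistent scoring function for the alpha-quantile (VaR)
    return ((loss <= var_forecast).astype(float) - alpha) * (var_forecast - loss)

rng = np.random.default_rng(3)
alpha = 0.99
losses = rng.normal(0.0, 1.0, size=5000)        # realized losses

var_correct = np.full(5000, norm.ppf(alpha))    # method A: true 99% quantile
var_too_low = np.full(5000, norm.ppf(0.95))     # method B: misspecified VaR

# traditional backtest: exceedance frequency should be close to 1 - alpha
exceed_rate = np.mean(losses > var_correct)

# comparative backtest: the method with the lower average score is preferred
score_a = pinball_loss(var_correct, losses, alpha).mean()
score_b = pinball_loss(var_too_low, losses, alpha).mean()
```

The exceedance check can only validate or reject a single procedure; the score comparison ranks competing procedures, which is the incentive property the abstract argues regulation should exploit.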