
    Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction

    Despite the recent popularity of deep generative state space models, few comparisons have been made between network architectures and the inference steps of the Bayesian filtering framework -- with most models simultaneously approximating both state transition and update steps with a single recurrent neural network (RNN). In this paper, we introduce the Recurrent Neural Filter (RNF), a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders. Testing the RNF on three real-world time series datasets, we demonstrate that the decoupled representations learnt not only improve the accuracy of one-step-ahead forecasts while providing realistic uncertainty estimates, but also facilitate multistep prediction through the separation of encoder stages.
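    The sketch below illustrates the idea described in the abstract -- one recurrent cell per Bayesian filtering step rather than a single RNN for both. It is a minimal illustration in PyTorch, not the authors' implementation; the module names, layer sizes, and choice of LSTM cells are assumptions.

```python
# Minimal sketch (not the authors' implementation) of a recurrent filter with
# separate cells for the Bayesian prediction (state transition) and update
# (observation correction) steps, decoded into a Gaussian one-step forecast.
import torch
import torch.nn as nn

class RecurrentNeuralFilterSketch(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
        super().__init__()
        # Prediction step: propagates the latent state without new observations.
        self.transition_cell = nn.LSTMCell(hidden_dim, hidden_dim)
        # Update step: corrects the propagated state using the latest observation.
        self.update_cell = nn.LSTMCell(input_dim, hidden_dim)
        # Decoder: maps the filtered state to a Gaussian forecast.
        self.mean_head = nn.Linear(hidden_dim, output_dim)
        self.logvar_head = nn.Linear(hidden_dim, output_dim)

    def forward(self, observations: torch.Tensor):
        # observations: (batch, time, input_dim)
        batch, steps, _ = observations.shape
        h = observations.new_zeros(batch, self.mean_head.in_features)
        c = torch.zeros_like(h)
        means, logvars = [], []
        for t in range(steps):
            # Prediction (transition) step: evolve the state from itself.
            h, c = self.transition_cell(h, (h, c))
            # Update step: fold in the observation at time t.
            h, c = self.update_cell(observations[:, t], (h, c))
            means.append(self.mean_head(h))
            logvars.append(self.logvar_head(h))
        return torch.stack(means, dim=1), torch.stack(logvars, dim=1)
```

    Because the transition cell never consumes observations, iterating it on its own latent state yields multistep forecasts, mirroring the separation of encoder stages the abstract refers to.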

    DeepLOB: Deep Convolutional Neural Networks for Limit Order Books

    We develop a large-scale deep learning model to predict price movements from limit order book (LOB) data of cash equities. The architecture utilises convolutional filters to capture the spatial structure of the limit order books as well as LSTM modules to capture longer time dependencies. The proposed network outperforms all existing state-of-the-art algorithms on the benchmark LOB dataset [1]. In a more realistic setting, we test our model using one year of market quotes from the London Stock Exchange, and the model delivers a remarkably stable out-of-sample prediction accuracy for a variety of instruments. Importantly, our model translates well to instruments which were not part of the training set, indicating the model's ability to extract universal features. In order to better understand these features and to go beyond a "black box" model, we perform a sensitivity analysis to understand the rationale behind the model predictions and reveal the components of LOBs that are most relevant. The ability to extract robust features which translate well to other instruments is an important property of our model which has many other applications.
    Comment: 12 pages, 9 figures
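    A minimal sketch of the kind of architecture the abstract describes -- convolutional filters over the price-level axis of LOB snapshots, followed by an LSTM over time and a three-class price-movement head. This is not the published DeepLOB network; all layer sizes and names are illustrative assumptions.

```python
# Minimal sketch (not the published DeepLOB network): convolutions act across
# the LOB feature (price level) axis, an LSTM models longer time dependencies,
# and a linear head emits logits for down / stationary / up price movements.
import torch
import torch.nn as nn

class DeepLOBSketch(nn.Module):
    def __init__(self, num_features: int = 40, hidden_dim: int = 64):
        super().__init__()
        # Convolutions across the feature axis compress the LOB snapshot.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 2), stride=(1, 2)), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=(1, 2), stride=(1, 2)), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=(1, num_features // 4)), nn.ReLU(),
        )
        # LSTM captures longer-range temporal dependencies.
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 3)

    def forward(self, lob: torch.Tensor):
        # lob: (batch, time, num_features)
        x = lob.unsqueeze(1)                  # (batch, 1, time, features)
        x = self.conv(x)                      # (batch, 16, time, 1)
        x = x.squeeze(-1).transpose(1, 2)     # (batch, time, 16)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # logits for the latest time step
```

    An input of shape (batch, time, 40) would correspond, for example, to ten price levels with bid/ask price and size on each side, the usual flattened LOB snapshot format.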

    Modelling strong interactions and longitudinally polarized vector boson scattering

    We study scattering of the electroweak gauge bosons in 5D warped models. Within two different models we determine the precise manner in which the Higgs boson and the vector resonances ensure the unitarity of longitudinal vector boson scattering. We identify three separate scales that determine the dynamics of the scattering process in all cases. For a quite general 5D background geometry, these scales can be linked to a simple functional of the warp factor. The models smoothly interpolate between a `composite' Higgs limit and a Higgsless limit. By holographic arguments, these models provide an effective description of vector boson scattering in 4D models with a strongly coupled electroweak symmetry breaking sector.
    Comment: 30 pages, no figures
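    For background, the standard 4D unitarity argument that this kind of analysis builds on can be summarised as follows; this is a textbook relation, not a formula taken from the paper's 5D construction, and the coupling modifier a is introduced here purely for illustration.

```latex
% Leading high-energy behaviour of longitudinal vector boson scattering with
% a Higgs-like scalar whose coupling to W pairs is a times the Standard Model
% value (a is an illustrative parameter, not one defined in the paper).
\[
  \mathcal{A}\!\left(W_L^+ W_L^- \to Z_L Z_L\right)
  \;\simeq\; \frac{s}{v^2}\,\bigl(1 - a^2\bigr) + \mathcal{O}(s^0),
  \qquad v \simeq 246~\mathrm{GeV}.
\]
```

    For a = 1 the Higgs exchange cancels the growth with s and unitarises the amplitude on its own; for a < 1 (a `composite'-like limit) or a = 0 (a Higgsless limit), the residual growth must be cut off by additional states, which in warped models are the vector resonances.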