29,920 research outputs found

    Measurements of inclusive jet and dijet cross sections at the Large Hadron Collider

    This review discusses the measurements of the inclusive jet and dijet cross sections performed by the experimental collaborations at the LHC during what is now called LHC Run 1 (2009 - 2013). It summarises some of the experimental challenges and the techniques used in the measurements of jet cross sections during LHC Run 1.
    Comment: Contribution to "Jet Measurements at the LHC", G. Dissertori ed. To appear in International Journal of Modern Physics A (IJMPA)

    Deep Learning for Forecasting Stock Returns in the Cross-Section

    Many studies have been undertaken using machine learning techniques, including neural networks, to predict stock returns. Recently, a method known as deep learning, which achieves high performance mainly in image recognition and speech recognition, has attracted attention in the machine learning field. This paper implements deep learning to predict one-month-ahead stock returns in the cross-section of the Japanese stock market and investigates the performance of the method. Our results show that deep neural networks generally outperform shallow neural networks, and the best networks also outperform representative machine learning models. These results indicate that deep learning shows promise as a skillful machine learning method for predicting stock returns in the cross-section.
    Comment: 12 pages, 2 figures, 8 tables, accepted at PAKDD 201
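
    The abstract gives no code; as a minimal sketch of the deep-versus-shallow comparison it describes, one could fit two multilayer perceptrons on cross-sectional factor data. Everything below (the synthetic data, the number of factors, and the layer sizes) is an illustrative assumption, not the authors' setup.

```python
# Minimal sketch (not the authors' code): compare a deep and a shallow
# feed-forward network for one-month-ahead cross-sectional return prediction.
# Feature counts, layer sizes, and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_stocks, n_factors = 1000, 25                 # e.g. valuation, momentum, quality factors
X = rng.normal(size=(n_stocks, n_factors))
y = X @ rng.normal(size=n_factors) * 0.01 + rng.normal(scale=0.05, size=n_stocks)

X = StandardScaler().fit_transform(X)          # standardise features across the cross-section

shallow = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
deep = MLPRegressor(hidden_layer_sizes=(64, 32, 16, 8), max_iter=2000, random_state=0)

for name, model in [("shallow", shallow), ("deep", deep)]:
    model.fit(X[:800], y[:800])                # fit on one part of the cross-section
    score = model.score(X[800:], y[800:])      # out-of-sample R^2 on held-out stocks
    print(f"{name}: out-of-sample R^2 = {score:.3f}")
```

    In practice the fit would be repeated month by month and the predicted returns used to rank stocks, but the point of the sketch is only the deep-versus-shallow architecture comparison.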

    WMT 2016 Multimodal translation system description based on bidirectional recurrent neural networks with double-embeddings

    Bidirectional Recurrent Neural Networks (BiRNNs) have shown outstanding results on sequence-to-sequence learning tasks. This architecture is especially interesting for the multimodal machine translation task, since BiRNNs can deal with images and text. In most translation systems, the same word embedding is fed to both BiRNN units. In this paper, we present several experiments to enhance a baseline sequence-to-sequence system (Elliott et al., 2015), for example, by using double embeddings. These embeddings are trained on the forward and backward directions of the input sequence. Our system is trained, validated and tested on the Multi30K dataset (Elliott et al., 2016) in the context of the WMT 2016 Multimodal Translation Task. The obtained results show that the double-embedding approach performs significantly better than the traditional single-embedding one.
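
    The core idea of the abstract, giving the forward and backward recurrent units their own embedding tables, can be sketched as follows. This is an assumption about the general mechanism, not the WMT 2016 system itself; the vocabulary size, dimensions, and GRU choice are illustrative.

```python
# Minimal sketch (assumption, not the paper's system): a bidirectional encoder
# where the forward and backward GRUs each read the source sentence through
# their own embedding table ("double embeddings"). Sizes are illustrative.
import torch
import torch.nn as nn

class DoubleEmbeddingBiEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.emb_fwd = nn.Embedding(vocab_size, emb_dim)   # embedding for the forward direction
        self.emb_bwd = nn.Embedding(vocab_size, emb_dim)   # separate embedding for the backward direction
        self.rnn_fwd = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.rnn_bwd = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len) word ids
        fwd_out, _ = self.rnn_fwd(self.emb_fwd(tokens))
        bwd_out, _ = self.rnn_bwd(self.emb_bwd(tokens.flip(1)))
        bwd_out = bwd_out.flip(1)               # re-align backward states with token positions
        return torch.cat([fwd_out, bwd_out], dim=-1)   # annotations for decoder attention

encoder = DoubleEmbeddingBiEncoder()
dummy = torch.randint(0, 10000, (2, 7))         # batch of 2 sentences, 7 tokens each
print(encoder(dummy).shape)                     # torch.Size([2, 7, 1024])
```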

    Neural Natural Language Inference Models Enhanced with External Knowledge

    Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge, and how should NLI models be built to leverage it? In this paper, we enrich state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models and achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
    Comment: Accepted by ACL 201
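
    One common way to inject external lexical knowledge into an NLI model is to add relation features (e.g. synonym, antonym, hypernym indicators) to the premise-hypothesis co-attention scores. The sketch below illustrates that general idea only; it is an assumption about the mechanism, not the paper's model, and the function name and feature layout are hypothetical.

```python
# Minimal sketch (an assumption about the general idea, not the paper's model):
# enrich premise/hypothesis co-attention with external lexical-relation features
# added to the alignment scores before softmax.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_enriched_attention(premise, hypothesis, relations, weight=1.0):
    """premise: (m, d) token vectors, hypothesis: (n, d) token vectors,
    relations: (m, n, k) external-knowledge features per token pair."""
    scores = premise @ hypothesis.T                    # standard dot-product alignment
    scores = scores + weight * relations.sum(axis=-1)  # boost pairs with known lexical relations
    attn_p2h = softmax(scores, axis=1)                 # each premise token attends over hypothesis
    return attn_p2h @ hypothesis                       # knowledge-aware aligned representations

m, n, d, k = 5, 7, 16, 4
rng = np.random.default_rng(0)
aligned = knowledge_enriched_attention(
    rng.normal(size=(m, d)), rng.normal(size=(n, d)),
    rng.integers(0, 2, size=(m, n, k)).astype(float))
print(aligned.shape)   # (5, 16)
```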