
    Modular Networks: Learning to Decompose Neural Computation

    Scaling model capacity has been vital to the success of deep learning. For a typical network, the necessary compute resources and training time grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and the modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks to both image recognition and language modeling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that modules specialize in interpretable contexts. Comment: NIPS 2018
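    The conditional-computation idea in this abstract lends itself to a compact illustration. Below is a minimal sketch, assuming a simple softmax gating controller over a pool of small feed-forward modules; this is an illustrative stand-in, not the paper's actual training algorithm, which learns a hard input-to-module decomposition end-to-end without a diversity regularizer.

```python
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    """Conditional computation sketch: a learned controller scores a
    pool of candidate modules and the layer mixes their outputs per
    input. Illustrative only; the paper trains hard module choices."""

    def __init__(self, dim, num_modules=4):
        super().__init__()
        self.controller = nn.Linear(dim, num_modules)  # per-input module scores
        self.module_pool = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(num_modules)
        )

    def forward(self, x):
        probs = torch.softmax(self.controller(x), dim=-1)          # (batch, M)
        outs = torch.stack([m(x) for m in self.module_pool], -1)   # (batch, dim, M)
        # A soft mixture keeps decomposition and modules trainable
        # end-to-end; a hard argmax would run only the chosen module.
        return (outs * probs.unsqueeze(1)).sum(-1)

x = torch.randn(8, 32)
print(ModularLayer(dim=32)(x).shape)  # torch.Size([8, 32])
```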

    Deep Learning for Forecasting Stock Returns in the Cross-Section

    Many studies have used machine learning techniques, including neural networks, to predict stock returns. Recently, a method known as deep learning, which achieves high performance mainly in image and speech recognition, has attracted attention in the machine learning field. This paper applies deep learning to predict one-month-ahead stock returns in the cross-section of the Japanese stock market and investigates the performance of the method. Our results show that deep neural networks generally outperform shallow neural networks, and the best networks also outperform representative machine learning models. These results indicate that deep learning shows promise as a skillful machine learning method for predicting stock returns in the cross-section. Comment: 12 pages, 2 figures, 8 tables, accepted at PAKDD 2018
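    As a rough illustration of the deep-versus-shallow comparison, the sketch below builds an MLP return regressor of configurable depth. The feature count, layer widths, and random data are hypothetical placeholders, not the paper's dataset or architecture.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden, depth):
    """MLP regressor for one-month-ahead returns: depth=1 gives a
    shallow net, depth>=3 a deeper one, mirroring the paper's
    comparison. All dimensions here are hypothetical."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, 1))  # predicted next-month return
    return nn.Sequential(*layers)

# Hypothetical cross-section: 500 stocks x 25 standardized characteristics.
features = torch.randn(500, 25)
returns = torch.randn(500, 1)  # realized one-month-ahead returns

model = make_mlp(in_dim=25, hidden=64, depth=4)
loss = nn.functional.mse_loss(model(features), returns)
loss.backward()
print(float(loss))
```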

    Individual and global adaptation in networks

    The structure of complex biological and socio-economic networks affects the selective pressures or behavioural incentives of components in that network, and reflexively, the evolution and behaviour of individuals in those networks changes the structure of such networks over time. Such ‘adaptive networks’ underlie how gene-regulation networks evolve, how ecological networks self-organise, and how networks of strategic agents co-create social organisations. Although these domains differ in the details, each can be characterised as a network of self-interested agents that alter their connections in the direction that increases their individual utility. Recent work shows that such dynamics are equivalent to associative learning, which is well understood in the context of neural networks. Associative learning in neural substrates is the result of mandated learning rules (e.g. Hebbian learning), but in networks of autonomous agents ‘associative induction’ occurs as a result of local individual incentives to alter connections. Using results from a number of recent studies, here we review the theoretical principles that can be transferred between disciplines as a result of this isomorphism, and the implications for the organisation of genetic, social and ecological networks.
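    The equivalence between selfish rewiring and associative learning described above can be made concrete with a toy model. In the sketch below, each agent's utility is assumed to be u_i = Σ_j w_ij s_i s_j (a hypothetical but common modelling choice), so the gradient of an agent's utility with respect to one of its connections is s_i s_j, exactly the Hebbian update; the two-group behaviour pattern is invented purely to make the learned associations visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta = 6, 0.1
w = np.zeros((n, n))  # connection strengths between agents

# Hypothetical setup: two groups of agents whose behaviours align
# within-group, so there are correlations for the network to learn.
for _ in range(200):
    g1, g2 = rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])
    s = np.array([g1, g1, g1, g2, g2, g2])
    # Each self-interested agent nudges its connections in the direction
    # that raises its utility u_i = sum_j w[i, j] * s[i] * s[j]; the
    # gradient d(u_i)/d(w[i, j]) = s[i] * s[j] is the Hebbian rule.
    w += eta * np.outer(s, s)
    np.fill_diagonal(w, 0.0)

# Weights now encode the within-group correlations: 'associative
# induction' arising from purely local incentives.
print(np.round(w, 1))
```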
