
    Transfer learning approach for financial applications

    Artificial neural networks learn to solve new problems through a computationally intensive and time-consuming process. One way to reduce the time required is to inject pre-existing knowledge into the network. To make use of past knowledge, we can take advantage of techniques that transfer the knowledge learned on one task and reuse it on another (sometimes unrelated) task. In this paper we propose a novel selective breeding technique that extends the transfer learning with behavioural genetics approach proposed by Kohli, Magoulas and Thomas (2013), and evaluate its performance on financial data. Numerical evidence demonstrates the credibility of the new approach. We provide insights into the operation of transfer learning and highlight the benefits of using behavioural principles and selective breeding when tackling a set of diverse financial application problems.
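    The core idea above, reusing knowledge learned on one task as the starting point for another, can be illustrated with a minimal weight-transfer sketch. The snippet below uses PyTorch, synthetic data, and a plain feed-forward network; it shows only generic weight reuse, not the paper's selective breeding or behavioural genetics extension, and every name, layer size, and dataset here is an illustrative assumption.

```python
import torch
import torch.nn as nn

def make_net(n_features, n_classes):
    # Small feed-forward classifier; layer sizes are arbitrary.
    return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_classes))

def train(net, X, y, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()
    return net

# Source task: a synthetic stand-in for one financial classification problem.
Xs, ys = torch.randn(512, 10), torch.randint(0, 2, (512,))
source = train(make_net(10, 2), Xs, ys)

# Target task: inject the pre-existing knowledge by copying the hidden layer,
# then continue training on the new (possibly unrelated) problem.
Xt, yt = torch.randn(128, 10), torch.randint(0, 2, (128,))
target = make_net(10, 2)
target[0].load_state_dict(source[0].state_dict())
train(target, Xt, yt, epochs=20)
```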

    Learning From Major Accidents: A Meta-Learning Perspective

    Learning from the past is essential to improve safety and reliability in the chemical industry. In the context of Industry 4.0 and Industry 5.0, where Artificial Intelligence and IoT are expanding throughout every industrial sector, it is essential to determine whether an artificial learner can exploit historical accident data to support a more efficient and sustainable learning framework. One important limitation of Machine Learning algorithms is their difficulty in generalizing over multiple tasks. In this context, the present study investigates meta-learning and transfer learning, evaluating whether the knowledge extracted from a generic accident database could be used to predict the consequences of new, technology-specific accidents. To this end, a classification algorithm is trained on a large and generic accident database to learn the relationship between accident features and consequence severity from a diverse pool of examples. Later, the acquired knowledge is transferred to another domain to predict the number of fatalities and injuries in new accidents. The methodology is evaluated on a test case, where two classification algorithms are trained on a generic accident database (i.e., the Major Hazard Incident Data Service) and evaluated on a technology-specific, lower-quality database. The results suggest that automated algorithms can learn from historical data and transfer knowledge to predict the severity of different types of accidents. The findings indicate that the knowledge gained from previous tasks might be used to address new tasks. Therefore, the proposed approach reduces the need for new data and the cost of the analyses.
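    A minimal sketch of the cross-database setup described above: fit a classifier on a generic accident dataset, then evaluate it unchanged on a technology-specific one. The file names, feature columns, and severity label below are placeholder assumptions rather than the schema of the databases used in the study, and scikit-learn's RandomForestClassifier merely stands in for the unspecified classification algorithms.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

FEATURES = ["substance", "quantity", "activity", "cause"]  # assumed accident features
LABEL = "severity_class"                                    # e.g. binned fatalities/injuries

generic = pd.read_csv("generic_accidents.csv")     # generic, MHIDAS-like records (assumed file)
specific = pd.read_csv("specific_accidents.csv")   # lower-quality, technology-specific records

# One-hot encode categorical features and align the target columns to the source columns.
X_train = pd.get_dummies(generic[FEATURES])
X_test = pd.get_dummies(specific[FEATURES]).reindex(columns=X_train.columns, fill_value=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, generic[LABEL])

# Transfer step: no retraining on the target domain, only evaluation of the carried-over model.
print(classification_report(specific[LABEL], clf.predict(X_test)))
```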

    Towards a Unified View of Affinity-Based Knowledge Distillation

    Knowledge transfer between artificial neural networks has become an important topic in deep learning. Among the open questions are what kind of knowledge needs to be preserved for the transfer, and how it can be effectively achieved. Several recent works have shown good performance of distillation methods using relation-based knowledge. These algorithms are extremely attractive in that they are based on simple inter-sample similarities. Nevertheless, a proper metric of affinity, and how to use it in this context, are far from well understood. In this paper, by explicitly modularising knowledge distillation into a framework of three components, i.e. affinity, normalisation, and loss, we give a unified treatment of these algorithms and study a number of unexplored combinations of the modules. With this framework we perform extensive evaluations of numerous distillation objectives for image classification, and obtain a few useful insights for effective design choices, while demonstrating how relation-based knowledge distillation can achieve performance comparable to the state of the art despite its simplicity.
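    The affinity / normalisation / loss decomposition can be sketched concretely. The snippet below pairs batch-wise cosine affinities with a row-wise softmax normalisation and a KL-divergence loss; these particular choices are illustrative assumptions for one possible combination, not the specific modules evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def affinity(feats):
    # Pairwise cosine similarities between samples in the batch (B x B matrix).
    z = F.normalize(feats.flatten(1), dim=1)
    return z @ z.t()

def normalise(aff, temperature=0.5):
    # Row-wise softmax turns each row into a distribution over the other samples.
    return F.softmax(aff / temperature, dim=1)

def distillation_loss(student_feats, teacher_feats):
    p_student = normalise(affinity(student_feats))
    p_teacher = normalise(affinity(teacher_feats))
    # Match the student's affinity distributions to the teacher's.
    return F.kl_div(p_student.log(), p_teacher, reduction="batchmean")

# Example: intermediate feature maps from teacher and student networks on the same batch.
student = torch.randn(16, 64, 8, 8)
teacher = torch.randn(16, 256, 8, 8)
loss = distillation_loss(student, teacher)
```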