On Explainable Deep Learning for Macroeconomic Forecasting and Finance

Abstract

Deep Learning (DL) has gained momentum in recent years thanks to the remarkable generalisation performance it achieves across many learning tasks. Nevertheless, practitioners and academics have sometimes been reluctant to apply these models because they are perceived as black boxes. This is particularly problematic in Economics and Finance. The objective of this thesis is to develop interpretable DL models and explainable DL tools with a focus on macroeconomic and financial applications; in doing so, we highlight the connections between such models and standard economic ones. The first part of the work introduces a new class of interpretable models called Deep Dynamic Factor Models, which merges the DL literature on autoencoders with the econometric literature on Dynamic Factor Models. The approach is validated empirically on both synthetic and real-time macroeconomic data. The second part analyses feature attribution methods, and Shapley values in particular, among the explainability tools used to additively decompose model predictions. We highlight one of their limitations: they require the definition of a baseline that represents the missingness of a feature. A solution to this problem is proposed and compared with the baselines currently in use, both on simulated data and in the financial context of credit card default, and we show that the proposed baseline is the only one that accounts for the specific use of the model. The final part of the work discusses the use of DL techniques for dynamic asset allocation. Using US market data, we perform a recursive out-of-sample comparison of machine learning, economic-financial, and hybrid models, including the one introduced in the first part of the work. Finally, we present a nonlinear, factor-based portfolio performance attribution obtained via Shapley values and the baseline proposed in the second part of the work.
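
For context, the additive decomposition that baseline-dependent Shapley attributions provide can be sketched as follows. The snippet below is an illustrative brute-force implementation, not code from the thesis: it assumes the standard convention in which a "missing" feature is replaced by the corresponding entry of a fixed baseline vector b, and the function names and toy model are our own.

```python
# Illustrative sketch: exact "baseline" Shapley values for a model f,
# where absent features are replaced by a fixed baseline b.
# The attributions additively decompose the prediction:
#     f(x) = f(b) + sum_i phi_i
from itertools import combinations
from math import factorial

import numpy as np


def baseline_shapley(f, x, b):
    """Exact Shapley attributions of f(x) relative to baseline b.

    f : callable taking a 1-D numpy array and returning a scalar
    x : 1-D numpy array, the instance to explain
    b : 1-D numpy array, the baseline representing feature "missingness"
    """
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        # Features in `subset` keep their observed value, the rest the baseline.
        z = b.copy()
        z[list(subset)] = x[list(subset)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight |S|!(n-|S|-1)!/n! for a coalition S of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(s + (i,)) - value(s))
    return phi


if __name__ == "__main__":
    # Toy check of the additive decomposition on a small nonlinear model.
    f = lambda z: z[0] * z[1] + z[2] ** 2
    x = np.array([1.0, 2.0, 3.0])
    b = np.zeros(3)  # the choice of baseline is exactly the point at issue
    phi = baseline_shapley(f, x, b)
    print(phi, f(b) + phi.sum(), f(x))  # f(b) + sum(phi) == f(x)
```

Because the attributions always sum to f(x) - f(b), different baselines yield different decompositions of the same prediction, which is why the choice of baseline discussed in the second part of the thesis matters.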
