10 research outputs found

    A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods

    Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks, offering advantages such as enhanced customer experience, the democratisation of financial services, improved consumer protection, and better risk management. However, these complex models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance. This has led to the rise of eXplainable Artificial Intelligence (XAI) methods, which aim to create models that humans can readily understand. Classical XAI methods such as LIME and SHAP have been developed to provide explanations for complex models. While these methods have made significant contributions, they also have limitations, including computational complexity, inherent model bias, sensitivity to data sampling, and difficulty handling feature dependence. In this context, this paper explores good practices for deploying explainability in AI-based systems for finance, emphasising the importance of data quality, audience-specific methods, consideration of data properties, and the stability of explanations. These practices aim to address the unique challenges and requirements of the financial industry and to guide the development of effective XAI tools.
    Comment: 11 pages, 1 figure
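
    As a concrete illustration of the kind of post-hoc explanation discussed above, the following is a minimal sketch (not taken from the paper) that attributes a tree-based return forecast to its lagged-return features with SHAP. It assumes the shap and scikit-learn packages are installed; the synthetic price series, the five-lag feature construction, and all names are purely illustrative.

# Minimal sketch: SHAP attributions for a lag-feature return forecaster.
# Assumptions: shap, scikit-learn, and numpy are installed; data is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=500)) + 100.0     # synthetic price series
returns = np.diff(prices) / prices[:-1]               # simple returns

# Features X[t] = (r[t-5], ..., r[t-1]); target y[t] = r[t].
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer returns one Shapley attribution per lag feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[-10:])          # explain the last 10 forecasts
print(shap_values.shape)                              # (10, 5)
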

    Multiple-instance Learning as a Framework to Explain with Shapley Coefficients

    Explainability and interpretability have become questions of fundamental importance for the safe and responsible deployment of modern machine learning models in high-stakes scenarios. There are many examples of autonomous systems that accidentally and systematically underperform on minorities or emulate hateful human behavior. Notwithstanding recent advances in fair and interpretable machine learning, several theoretical questions about the validity of popular explanation methods remain open. In this thesis, we study multiple-instance learning as a framework for explaining model predictions with Shapley coefficients. In particular, we focus on local explanations, i.e. we seek the features of an input that contribute most to a model's prediction. We show that a principled approach to explainability can produce fast and exact explanation methods with precise mathematical guarantees on their speed and accuracy. We apply our new explanation method to a medical imaging task of clinical importance, intracranial hemorrhage detection, where autonomous systems can support radiologists in their daily work, for example by prioritizing the most severe cases or providing a second opinion on subtle ones. We find that an explainability-driven approach can significantly reduce the number of labels needed to train a model, and therefore make collecting new datasets cheaper.
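
    For reference, the Shapley attributions this thesis builds on can be computed exactly by brute force when the number of features is small. The sketch below (not the thesis's fast MIL-based method) enumerates feature coalitions and marginalises absent features with a fixed baseline; it assumes only NumPy, and the toy linear model is illustrative.

# Minimal sketch: exact brute-force Shapley attributions for one prediction,
# replacing "absent" features by a fixed baseline value. Exponential in the
# number of features, so only suitable for small d.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, with absent features set to `baseline`."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                idx = list(S)
                x_with = baseline.copy()
                x_without = baseline.copy()
                x_with[idx + [i]] = x[idx + [i]]      # coalition S plus feature i
                x_without[idx] = x[idx]               # coalition S alone
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

# Toy linear model: Shapley values recover w * (x - baseline) exactly.
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(w @ z)
print(shapley_values(f, np.array([1.0, 2.0, 3.0]), np.zeros(3)))  # [ 2.  -2.   1.5]
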

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not, and can serve as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
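
    To illustrate the underlying idea of estimating a Gauss rule's error with a companion averaged rule, the sketch below implements Laurie's simpler averaged Gaussian rule for the Legendre weight via the Golub-Welsch eigenvalue approach. This is an illustration under stated assumptions, not the optimal generalized averaged construction of this work.

# Minimal sketch: Gauss-Legendre rule G_n plus Laurie's (n+1)-point anti-Gauss
# rule, averaged to estimate the error of G_n. Assumes only NumPy.
import numpy as np

def legendre_beta(k):
    """Three-term recurrence coefficient beta_k for monic Legendre polynomials."""
    return k * k / (4.0 * k * k - 1.0)

def gauss_from_jacobi(alpha, beta_offdiag, mu0=2.0):
    """Golub-Welsch: nodes and weights from a symmetric tridiagonal Jacobi matrix."""
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta_offdiag), 1)
         + np.diag(np.sqrt(beta_offdiag), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2
    return nodes, weights

def estimate(f, n):
    # n-point Gauss-Legendre rule G_n
    beta = np.array([legendre_beta(k) for k in range(1, n)])
    xg, wg = gauss_from_jacobi(np.zeros(n), beta)
    # (n+1)-point anti-Gauss rule: same recurrence, last off-diagonal doubled
    beta_anti = np.array([legendre_beta(k) for k in range(1, n + 1)])
    beta_anti[-1] *= 2.0
    xa, wa = gauss_from_jacobi(np.zeros(n + 1), beta_anti)
    gauss = wg @ f(xg)
    averaged = 0.5 * (gauss + wa @ f(xa))   # averaged rule (G_n + A_{n+1}) / 2
    return gauss, abs(averaged - gauss)     # value and error estimate for G_n

value, err_est = estimate(np.exp, 5)
print(value, err_est, abs(value - (np.e - 1.0 / np.e)))  # exact integral on [-1, 1]
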

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.