
    Interpretable Neural PDE Solvers using Symbolic Frameworks

    Partial differential equations (PDEs) are ubiquitous in the world around us, modelling phenomena from heat and sound to quantum systems. Recent advances in deep learning have resulted in the development of powerful neural solvers; however, while these methods have demonstrated state-of-the-art performance in both accuracy and computational efficiency, a significant challenge remains in their interpretability. Most existing methodologies prioritize predictive accuracy over clarity in the underlying mechanisms driving the model's decisions. Interpretability is crucial for trustworthiness and broader applicability, especially in scientific and engineering domains where neural PDE solvers might see the most impact. In this context, a notable gap in current research is the integration of symbolic frameworks (such as symbolic regression) into these solvers. Symbolic frameworks have the potential to distill complex neural operations into human-readable mathematical expressions, bridging the divide between black-box predictions and solutions.
    Comment: Accepted to the NeurIPS 2023 AI for Science Workshop. arXiv admin note: text overlap with arXiv:2310.1976
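
    To make the distillation idea concrete, here is a minimal sketch, assuming a small MLP as a stand-in for a trained neural solver and the gplearn symbolic-regression library (our choices, not the paper's setup): the symbolic regressor is fitted to the network's own predictions to recover a human-readable expression.

```python
# Hypothetical sketch: distilling a neural surrogate into a symbolic
# expression. gplearn and the toy target function are our assumptions,
# not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])  # stand-in for a PDE solution map

# "Black-box" neural solver, here just a small MLP surrogate.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Distill the network: symbolic regression on the network's own predictions.
sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=('add', 'sub', 'mul', 'sin'),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, net.predict(X))
print(sr._program)  # human-readable expression approximating the network
```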

    Interpretability and Explainability: A Machine Learning Zoo Mini-tour

    In this review, we examine the problem of designing interpretable and explainable machine learning models. Interpretability and explainability lie at the core of many machine learning and statistical applications in medicine, economics, law, and the natural sciences. Although interpretability and explainability have escaped a clear universal definition, many techniques motivated by these properties have been developed over the past 30 years, with the focus currently shifting towards deep learning methods. In this review, we emphasise the divide between interpretability and explainability and illustrate these two different research directions with concrete examples of the state-of-the-art. The review is intended for a general machine learning audience interested in exploring the problems of interpretation and explanation beyond logistic regression or random forest variable importance. This work is not an exhaustive literature survey, but rather a primer focusing selectively on certain lines of research which the authors found interesting or informative.
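
    To illustrate the divide the review draws, here is a minimal sketch, assuming scikit-learn and an illustrative dataset: an intrinsically interpretable logistic regression whose coefficients can be read directly, next to a black-box random forest explained post hoc with permutation importance.

```python
# Hypothetical sketch: an interpretable model read off directly vs. a
# black-box model explained post hoc. Dataset and models are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Interpretability: each logistic regression coefficient has a direct reading.
lr = LogisticRegression(max_iter=5000).fit(X, y)
print("coefficients:", lr.coef_[0][:5])

# Explainability: a post-hoc importance estimate for an opaque random forest.
rf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print("permutation importances:", result.importances_mean[:5])
```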

    Learning Channel Importance for High Content Imaging with Interpretable Deep Input Channel Mixing

    Uncovering novel drug candidates for treating complex diseases remains one of the most challenging tasks in early discovery research. To tackle this challenge, biopharma research has established a standardized high content imaging protocol that tags different cellular compartments per image channel. To judge the experimental outcome, the scientist requires knowledge of each channel's importance with respect to a certain phenotype in order to decode the underlying biology. In contrast to traditional image analysis approaches, such experiments are nowadays preferably analyzed by deep learning based approaches which, however, lack crucial information about the channel importance. To overcome this limitation, we present a novel approach which utilizes the multi-spectral information of high content images to interpret a certain aspect of cellular biology. To this end, we base our method on image blending concepts with alpha compositing for an arbitrary number of channels. More specifically, we introduce DCMIX, a lightweight, scalable and end-to-end trainable mixing layer which enables interpretable predictions in high content imaging while retaining the benefits of deep learning based methods. We conduct an extensive set of experiments on both the MNIST and RXRX1 datasets, demonstrating that DCMIX learns the biologically relevant channel importance without sacrificing prediction performance.
    Comment: Accepted @ DAGM German Conference on Pattern Recognition (GCPR) 202
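
    As a reading aid, here is a minimal sketch of what such a learnable channel-mixing layer might look like in PyTorch; the class name, shapes, and softmax normalisation are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a learnable channel-mixing layer in the spirit of
# DCMIX; names, shapes and the softmax normalisation are our assumptions.
import torch
import torch.nn as nn

class ChannelMix(nn.Module):
    """Blend C input channels into one map with learnable alpha weights."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(in_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        alpha = torch.softmax(self.logits, dim=0)  # weights sum to 1
        return (x * alpha.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

mix = ChannelMix(in_channels=6)           # e.g. six fluorescence channels
out = mix(torch.randn(2, 6, 64, 64))      # -> (2, 1, 64, 64)
print(torch.softmax(mix.logits, dim=0))   # read off learned channel importance
```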

    Female Models in AI and the Fight Against COVID-19

    Gender imbalance has persisted over time and is well documented in science, technology, engineering and mathematics (STEM), and particularly in artificial intelligence (AI). In this article we emphasize the importance of increasing the visibility and recognition of women researchers to attract and retain women in the AI field. We review the ratio of women in STEM and AI, its evolution through time, and the differences among disciplines. Then, we discuss the main sources of this gender imbalance, highlighting the lack of female role models and the problems that may arise, such as the so-called Marie Curie complex, survivorship bias, and impostor syndrome. We also emphasize the importance of the active participation of women researchers in conferences, providing statistics for the leading conferences. Finally, we give examples of several prestigious female researchers in the field and review their research work related to COVID-19 presented in the workshop “Artificial Intelligence for the Fight Against COVID-19” (AI4FA COVID-19), which is an example of a more balanced participation between genders.
    Funding: AXA Research Fund through the project “Early Prognosis of COVID-19 Infections via Machine Learning” under the Exceptional Flash Call “Mitigating risk in the wake of the COVID-19 pandemic”; Basque Government through the project “Mathematical Modeling Applied to Health”.

    Fine-grained provenance for high-quality data science

    In this work we analyze the typical operations of data preparation within a machine learning process, and provide infrastructure for generating very granular provenance records from it, at the level of individual elements within a dataset. Our contributions include: (i) the formal definition of a core set of preprocessing operators, (ii) the definition of provenance patterns for each of them, and (iii) a prototype implementation of an application-level provenance capture library that works alongside Python.
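
    As a reading aid, here is a minimal sketch of application-level provenance capture around one preprocessing operator; the record format and the pandas operator are our assumptions, not the library's actual API.

```python
# Hypothetical sketch: element-level provenance capture around a single
# pandas preprocessing operator. The record format is our assumption; the
# paper defines formal provenance patterns we do not reproduce here.
import pandas as pd

provenance_log = []

def dropna_with_provenance(df: pd.DataFrame) -> pd.DataFrame:
    out = df.dropna()
    removed = df.index.difference(out.index)
    # One record per operator invocation, down to the affected rows.
    provenance_log.append({"operator": "dropna", "removed_rows": list(removed)})
    return out

df = pd.DataFrame({"a": [1, None, 3], "b": [4, 5, None]})
clean = dropna_with_provenance(df)
print(provenance_log)  # [{'operator': 'dropna', 'removed_rows': [1, 2]}]
```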

    Interpretable Scientific Discovery with Symbolic Regression: A Review

    Symbolic regression is emerging as a promising machine learning method for learning the succinct, interpretable mathematical expressions underlying data. While it has traditionally been tackled with genetic programming, it has recently gained growing interest in the deep learning community as a data-driven model discovery method, achieving significant advances in application domains ranging from the fundamental to the applied sciences. This survey presents a structured and comprehensive overview of symbolic regression methods and discusses their strengths and limitations.