
    Z-graded supergeometry: Differential graded modules, higher algebroid representations, and linear structures

    The purpose of this thesis is to present a self-contained review of $\mathbb{Z}$-graded (respectively $\mathbb{N}$-graded) supergeometry, with emphasis on the development and study of two particular structures therein: representation theory and linear structures of $\mathcal{Q}$-manifolds and higher Lie algebroids (also known in the mathematics and physics literature as $\mathbb{Z}\mathcal{Q}$- and $\mathbb{N}\mathcal{Q}$-manifolds, respectively). Regarding the first notion, we introduce differential graded modules (DG-modules for short) of $\mathcal{Q}$-manifolds and the equivalent notion of representations up to homotopy in the case of Lie $n$-algebroids ($n\in\mathbb{N}$). These generalise the homonymous structures from the works of Vaintrob, Gracia-Saz and Mehta, and Arias Abad and Crainic, which already exist in the case of ordinary Lie algebroids, i.e. when $n=1$. The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy of Lie $n$-algebroids are explained. In particular, the case of Lie $2$-algebroids is analysed in detail. The compatibility of a graded Poisson bracket with the homological vector field on a $\mathbb{Z}$-graded manifold is shown to be equivalent to an (anti-)morphism from the coadjoint module to the adjoint module, leading to an alternative characterisation of non-degeneracy of graded Poisson structures. Applying this result to symplectic Lie $2$-algebroids gives another algebraic characterisation of Courant algebroids in terms of their adjoint and coadjoint representations. In addition, the Weil algebra of a general $\mathcal{Q}$-manifold is defined and computed explicitly in the case of Lie $n$-algebroids over a base (smooth) manifold $M$, together with a choice of a splitting and linear $TM$-connections. Similarly to the work of Abad and Crainic, our computation involves the coadjoint representation of the Lie $n$-algebroid and the induced $2$-term representations up to homotopy of the tangent bundle $TM$ on the vector bundles of the underlying complex of the Lie $n$-algebroid given by the choice of the linear connections. The second object that we define and explore in this work is linear structures on $\mathbb{Z}$-graded manifolds, for which we establish the connection with DG-modules and representations up to homotopy. In the world of split Lie $n$-algebroids, this leads to the notion of higher VB-algebroids, which we call VB-Lie $n$-algebroids; that is, Lie $n$-algebroids that are in a suitable sense linear over another Lie $n$-algebroid. We prove that there is an equivalence between the category of VB-Lie $n$-algebroids over a Lie $n$-algebroid $\underline{A}$ and the category of $(n+1)$-term representations up to homotopy of $\underline{A}$, generalising a well-known result from the theory of ordinary VB-algebroids over Lie algebroids, i.e., in our setting, VB-Lie $1$-algebroids over Lie $1$-algebroids.
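    For orientation, the following is a minimal sketch of the central objects in standard conventions; these are well-known definitions from the literature and may differ in sign and grading conventions from those adopted in the thesis.

\[
\text{A } \mathcal{Q}\text{-manifold is a graded manifold } \mathcal{M} \text{ with a degree } +1 \text{ vector field } \mathcal{Q} \text{ satisfying } [\mathcal{Q},\mathcal{Q}] = 2\,\mathcal{Q}^{2} = 0 .
\]
For an ordinary Lie algebroid $A \to M$ (the case $n=1$), the associated graded manifold is $A[1]$, whose functions are $\Gamma(\wedge^{\bullet} A^{*})$, and $\mathcal{Q}$ is the Chevalley--Eilenberg differential
\[
(\mathrm{d}_{A}\omega)(a_{0},\dots,a_{k})
= \sum_{i} (-1)^{i}\,\rho(a_{i})\,\omega(a_{0},\dots,\widehat{a_{i}},\dots,a_{k})
+ \sum_{i<j} (-1)^{i+j}\,\omega([a_{i},a_{j}],a_{0},\dots,\widehat{a_{i}},\dots,\widehat{a_{j}},\dots,a_{k}),
\]
so that $\mathrm{d}_{A}^{2}=0$ encodes the anchor compatibility and the Jacobi identity of $A$.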

    Model Transparency: Why do we care?


    The diffusion equation and applications in economics

    National Technical University of Athens--Master's Thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (Δ.Π.Μ.Σ.) "Applied Mathematical Sciences"

    Principled Diverse Counterfactuals in Multilinear Models

    Machine learning (ML) applications have automated numerous real-life tasks, improving both private and public life. However, the black-box nature of many state-of-the-art models poses the challenge of model verification: how can one be sure that the algorithm bases its decisions on the proper criteria, or that it does not discriminate against certain minority groups? In this paper we propose a way to generate diverse counterfactual explanations from multilinear models, a broad class which includes Random Forests, as well as Bayesian Networks.
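    To make the task concrete, here is a minimal generic sketch of diverse counterfactual search over discrete features. It is an illustration of what "diverse counterfactuals" means, not the algorithm proposed in the paper; the classifier toy_model, the function diverse_counterfactuals, and the binary feature space are all made up for the example.

# Generic illustration of diverse counterfactual search (not the paper's algorithm).
# A counterfactual for an instance x is a nearby x' that the model classifies
# differently; "diverse" here means the returned counterfactuals flip different
# subsets of features.
from itertools import product

def toy_model(x):
    # Hypothetical black-box classifier over three binary features.
    return int(x[0] + x[1] + x[2] >= 2)

def diverse_counterfactuals(model, x, k=2):
    target = 1 - model(x)                      # the opposite prediction
    candidates = []
    for cand in product([0, 1], repeat=len(x)):
        cand = list(cand)
        if model(cand) == target:
            changed = frozenset(i for i in range(len(x)) if cand[i] != x[i])
            candidates.append((len(changed), changed, cand))
    candidates.sort(key=lambda t: t[0])        # prefer minimal feature changes
    chosen, used = [], []
    for _, changed, cand in candidates:
        # Diversity: skip candidates that flip exactly the same feature set
        # as a counterfactual already selected.
        if all(changed != u for u in used):
            chosen.append(cand)
            used.append(changed)
        if len(chosen) == k:
            break
    return chosen

print(diverse_counterfactuals(toy_model, [0, 0, 0]))
# e.g. [[0, 1, 1], [1, 0, 1]] -- two counterfactuals changing different features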

    Principles and Practice of Explainable Machine Learning

    Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in diverse areas such as computational biology, law and finance. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems so that we can trust them? In this report, we focus specifically on data-driven methods -- machine learning (ML) and pattern recognition models in particular -- so as to survey and distill the results and observations from the literature. The need for this report is underscored by the fact that ML models are increasingly deployed in a wide range of businesses. However, with the increasing prevalence and complexity of methods, business stakeholders have, at the very least, a growing number of concerns about the drawbacks of models, data-specific biases, and so on. Analogously, data science practitioners are often not aware of approaches emerging from the academic literature, or may struggle to appreciate the differences between methods, and so end up using industry standards such as SHAP. Here, we have undertaken a survey to help industry practitioners (but also data scientists more broadly) understand the field of explainable machine learning better and apply the right tools. Our latter sections build a narrative around a putative data scientist, and discuss how she might go about explaining her models by asking the right questions.

    Transparency: from tractability to model explanations

    As artificial intelligence (AI) and machine learning (ML) models get increasingly incorporated into critical applications, ranging from medical diagnosis to loan approval, they show a tremendous potential to impact society in a beneficial way; however, this is predicated on establishing a transparent relationship between humans and automation. In particular, transparency requirements span multiple dimensions, incorporating both technical and societal aspects, in order to promote the responsible use of AI/ML. In this thesis we present contributions along both of these axes, starting with the technical side and model transparency, where we study ways to enhance tractable probabilistic models (TPMs) with properties that enable acquiring an in-depth understanding of their decision-making process. Following this, we expand the scope of our work, studying how providing explanations about a model's predictions influences the extent to which humans understand and collaborate with it, and finally we design an introductory course into the emerging field of explanations in AI to foster the competent use of the developed tools and methodologies.

    In more detail, the complex design of TPMs makes it very challenging to extract information that conveys meaningful insights, despite the fact that they are closely related to Bayesian networks (BNs), which readily provide such information. This has led to TPMs being viewed as black-boxes, in the sense that their internal representations are elusive, in contrast to BNs. The first part of this thesis challenges this view, focusing on the question of whether it is feasible to extend certain transparent features of BNs to TPMs. We start by considering the problem of transforming TPMs into alternative graphical models in a way that makes their internal representations easy to inspect. Furthermore, we study the utility of existing algorithms in causal applications, where we identify some significant limitations. To remedy this situation, we propose a set of algorithms that result in transformations that accurately uncover the internal representations of TPMs. Following this result, we look into the problem of incorporating probabilistic constraints into TPMs. Although it is well known that BNs satisfy this property, the complex structure of TPMs impedes applying the same arguments, so progress on this problem has been very limited. Nevertheless, in this thesis we provide formal proofs that TPMs can be made to satisfy both probabilistic and causal constraints through parameter manipulation, showing that incorporating a constraint corresponds to solving a system of multilinear equations. We conclude the technical contributions by studying the problem of generating counterfactual instances for classifiers based on TPMs, motivated by the fact that BNs are the building blocks of most standard approaches to this task. We propose a novel algorithm that is provably guaranteed to generate valid counterfactuals. The resulting algorithm takes advantage of the multilinear structure of TPMs, generalising existing approaches, while also allowing for a priori constraints that the final counterfactuals should respect.

    In the second part of this thesis we go beyond model transparency, looking into the role of explanations in achieving an effective collaboration between human users and AI. To study this we design a behavioural experiment where we show that explanations provide unique insights, which cannot be obtained by looking at more traditional uncertainty measures. The findings of this experiment provide evidence that explanations and uncertainty estimates have complementary functions, advocating in favour of incorporating elements of both in order to promote a synergistic relationship between humans and AI. Finally, building on our findings, we design a course on explanations in AI, covering both the technical details of state-of-the-art algorithms and the overarching goals, limitations, and methodological approaches in the field. This contribution aims to ensure that users can make competent use of explanations, a need that has also been highlighted by recent large-scale social initiatives. The resulting course was offered by the University of Edinburgh at MSc level, where student evaluations, as well as student performance, showcased the course's effectiveness in achieving its primary goals.
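    As a rough illustration of why such constraints lead to multilinear equations, consider the standard network-polynomial view of a probabilistic model (a textbook argument, not the thesis's own construction; the two-node network and parameter names below are purely illustrative). For a network $X \to Y$ over binary variables with parameters $\theta_{x} = P(X{=}1)$ and $\theta_{y|x} = P(Y{=}1 \mid X{=}x)$, the marginal of interest is
\[
P(Y{=}1) \;=\; \theta_{x}\,\theta_{y|1} \;+\; (1-\theta_{x})\,\theta_{y|0},
\]
which is multilinear: it has degree one in each individual parameter. Imposing the constraint $P(Y{=}1) = p$ therefore amounts to solving the multilinear equation
\[
\theta_{x}\,\theta_{y|1} + (1-\theta_{x})\,\theta_{y|0} - p \;=\; 0
\]
for the free parameters, and imposing several constraints simultaneously yields a system of such equations.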