
    Explaining Neural Networks by Decoding Layer Activations

    We present a `CLAssifier-DECoder' architecture (\emph{ClaDec}) that facilitates the comprehension of the output of an arbitrary layer in a neural network (NN). It uses a decoder to transform the non-interpretable representation of the given layer into a representation that is more similar to a domain humans are familiar with. In an image recognition problem, one can recognize what information a layer represents by contrasting reconstructed images of \emph{ClaDec} with those of a conventional auto-encoder (AE) serving as a reference. We also extend \emph{ClaDec} to allow trading off human interpretability against fidelity. We evaluate our approach for image classification using convolutional NNs and show that visualizations reconstructed from a classifier's encodings capture more classification-relevant information than those of conventional AEs. Relevant code is available at \url{https://github.com/JohnTailor/ClaDec}.
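
    To make the setup concrete, below is a minimal PyTorch sketch of the ClaDec idea, not the code from the linked repository: a decoder is trained to reconstruct inputs from the activations of a frozen, pre-trained classifier, and its reconstructions are then contrasted with those of a reference auto-encoder. The `classifier.features` hook, the layer size, and the 1x28x28 image shape are illustrative assumptions.

        # Minimal sketch (assumptions: a frozen classifier exposing
        # `features(x)` for the layer of interest, 1x28x28 inputs).
        import torch
        import torch.nn as nn

        class Decoder(nn.Module):
            """Maps a flat layer activation back to a 1x28x28 image."""
            def __init__(self, latent_dim):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(latent_dim, 256), nn.ReLU(),
                    nn.Linear(256, 28 * 28), nn.Sigmoid(),
                )

            def forward(self, z):
                return self.net(z).view(-1, 1, 28, 28)

        def train_cladec(classifier, layer_dim, loader, epochs=5):
            """Fit a decoder on activations of the frozen classifier."""
            decoder = Decoder(layer_dim)
            opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
            classifier.eval()  # the classifier is never updated
            for _ in range(epochs):
                for x, _ in loader:
                    with torch.no_grad():
                        z = classifier.features(x)  # layer to explain
                    loss = nn.functional.mse_loss(decoder(z), x)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            return decoder  # contrast its outputs with a reference AE

    Because the decoder can only use what the classifier's layer retains, any image detail that survives in its reconstructions is precisely the information that layer encodes.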

    Large Language Models for Difficulty Estimation of Foreign Language Content with Application to Language Learning

    We use large language models to help learners enhance their proficiency in a foreign language. This is accomplished by identifying content on topics that the user is interested in and that closely aligns with the learner's proficiency level in that foreign language. Our work centers on French content, but our approach is readily transferable to other languages. Our solution offers several characteristics that distinguish it from existing language-learning solutions: a) the discovery of content across topics the learner cares about, which increases motivation; b) a more precise estimation of the linguistic difficulty of the content than traditional readability measures; and c) the availability of both textual and video-based content, where the linguistic complexity of video content is derived from its captions. We aspire for such technology to keep learners engaged in the language-learning process by continuously adapting the topics and the difficulty of the content to the learners' evolving interests and learning objectives.
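
    The abstract leaves the estimation mechanism open; one plausible realization, shown purely as a hedged sketch, is prompting an LLM for a CEFR level (A1-C2) and filtering content around the learner's level. Here `llm_complete` is a hypothetical placeholder for any text-completion client, and the prompt wording is an assumption, not the paper's.

        # Hypothetical prompt-based estimator; `llm_complete` stands in
        # for any completion client and is not a real API.
        CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

        PROMPT = (
            "You are a French language teacher. Rate the difficulty of "
            "the following French text on the CEFR scale (A1-C2). "
            "Answer with the level only.\n\nText:\n{text}"
        )

        def estimate_difficulty(text, llm_complete):
            """Return a CEFR level for `text`."""
            answer = llm_complete(PROMPT.format(text=text)).upper()
            for level in CEFR_LEVELS:
                if level in answer:
                    return level
            return "B1"  # neutral fallback on unparseable replies

        def matches_learner(text, learner_level, llm_complete):
            """Keep content at, or one step above, the learner's level."""
            idx = CEFR_LEVELS.index(estimate_difficulty(text, llm_complete))
            target = CEFR_LEVELS.index(learner_level)
            return target <= idx <= target + 1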

    Catheter Ablation of Ventricular Extrasystoles Originating from the Left Coronary Cusp

    We describe the case of a 55-year-old man with frequent ventricular extrasystoles displaying an inferior axis and positive QRS concordance in the precordial leads. The arrhythmia was successfully ablated from the left coronary cusp. The electrocardiographic and electrophysiological characteristics of this arrhythmia are discussed.

    An Interpretable Data Embedding Under Uncertain Distance Information


    Reflective-net: learning from explanations

    Humans possess a remarkable capability to make fast, intuitive decisions, but also to self-reflect, i.e., to explain decisions to themselves, and to learn efficiently from explanations given by others. This work provides first steps toward mimicking this process by capitalizing on explanations generated by an existing explanation method, Grad-CAM. Learning from explanations combined with conventional labeled data yields significant improvements for classification in terms of accuracy and training time.
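
    As an illustration of learning from explanations (a sketch of the general recipe, not the authors' implementation), the snippet below computes Grad-CAM maps from a pre-trained teacher and stacks them onto the input as an extra channel for a second classifier to train on. The `teacher.features`/`teacher.classify` split is an assumed interface.

        # Sketch under an assumed interface: `teacher.features(x)` returns
        # the last conv maps, `teacher.classify(fmaps)` the logits.
        import torch
        import torch.nn.functional as F

        def grad_cam(teacher, x, target_class):
            """Grad-CAM: gradient-weighted sum of conv feature maps."""
            x = x.requires_grad_(True)
            fmaps = teacher.features(x)        # (B, C, H, W)
            fmaps.retain_grad()
            logits = teacher.classify(fmaps)
            logits[torch.arange(len(x)), target_class].sum().backward()
            weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)  # GAP
            cam = F.relu((weights * fmaps).sum(dim=1, keepdim=True))
            return F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                                 align_corners=False)

        def explained_batch(teacher, x, y):
            """Stack inputs with their Grad-CAM maps for a second model."""
            cam = grad_cam(teacher, x, y).detach()
            cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
            return torch.cat([x, cam], dim=1)  # explanation as extra channel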

    Personalization of Deep Learning

    We discuss training techniques, objectives and metrics toward personalization of deep learning models. In machine learning, personalization tailors a trained model to a particular individual by optimizing one or more performance metrics while conforming to certain constraints. To personalize, we investigate three methods of ``curriculum learning'' and two approaches for data grouping, i.e., augmenting the data of an individual by adding similar data identified with an auto-encoder. We show that both ``curriculum learning'' and ``personalized'' data augmentation lead to improved performance on the data of an individual, though mostly at the cost of reduced performance on a more general, broader dataset.
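
    The auto-encoder-based data grouping lends itself to a short sketch (an illustration under assumed shapes, not the paper's exact procedure): a broad data pool is mined for samples whose latent codes lie close to the individual's data, and those samples augment the personal training set.

        # Illustration under assumed shapes: `encoder` maps a batch to
        # (N, D) latent codes; `pool_x` is a broad dataset tensor.
        import torch

        def personalized_subset(encoder, personal_x, pool_x, k=100):
            """Pick the k pool samples closest to the individual's data."""
            with torch.no_grad():
                z_personal = encoder(personal_x)   # (P, D)
                z_pool = encoder(pool_x)           # (N, D)
            # Distance of each pool sample to its nearest personal sample.
            dists = torch.cdist(z_pool, z_personal).min(dim=1).values
            idx = dists.topk(k, largest=False).indices
            return pool_x[idx]  # concatenate with personal_x for training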

    Explaining classifiers by constructing familiar concepts

    Interpreting a large number of neurons in deep learning is difficult. Our proposed `CLAssifier-DECoder' architecture (ClaDec) facilitates the understanding of the output of an arbitrary layer of neurons or subsets thereof. It uses a decoder that transforms the incomprehensible representation of the given neurons into a representation that is more similar to the domain a human is familiar with.
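
    The subset case can be handled with the same machinery as in the ClaDec sketch above; the following hedged fragment (reusing the assumed `classifier.features` hook and a trained decoder) zeroes all activations outside a chosen neuron subset before decoding, so the reconstruction reflects only those neurons.

        # Reuses the assumed `classifier.features` hook and a trained
        # decoder from the ClaDec sketch; `neuron_idx` picks the subset.
        import torch

        def decode_neuron_subset(classifier, decoder, x, neuron_idx):
            """Reconstruct x from a chosen subset of a layer's neurons."""
            with torch.no_grad():
                z = classifier.features(x)     # full activation, (B, D)
                mask = torch.zeros_like(z)
                mask[:, neuron_idx] = 1.0      # keep only chosen neurons
                return decoder(z * mask)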