17,546 research outputs found
Towards Evaluating Explanations of Vision Transformers for Medical Imaging
As deep learning models increasingly find applications in critical domains
such as medical imaging, the need for transparent and trustworthy
decision-making becomes paramount. Many explainability methods provide insights
into how these models make predictions by attributing importance to input
features. As the Vision Transformer (ViT) emerges as a promising alternative
to convolutional neural networks for image classification, its interpretability
remains an open research question. This paper investigates the performance of
various interpretation methods on a ViT applied to classify chest X-ray images.
We introduce criteria for evaluating the faithfulness, sensitivity, and
complexity of ViT explanations. The results indicate that Layer-wise Relevance
Propagation for transformers outperforms Local Interpretable Model-agnostic
Explanations and attention visualization, providing a more accurate and
reliable representation of what a ViT has actually learned. Our findings
provide insights into the applicability of ViT explanations in medical imaging
and highlight the importance of using appropriate evaluation criteria for
comparing them.
Comment: Accepted by the XAI4CV Workshop at CVPR 2023
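To make the faithfulness criterion concrete, below is a minimal sketch of a
perturbation-based faithfulness check: mask the most-attributed pixels and
measure the drop in the model's class score, so faithful attributions yield
larger drops. The `model`, `image`, and `attribution` inputs are hypothetical
placeholders, not the paper's actual setup.
```python
import numpy as np

def faithfulness_drop(model, image, attribution, k=0.1):
    """Mask the top-k fraction of pixels by attribution and return the
    drop in the model's class score (a larger drop = more faithful)."""
    base = model(image)
    flat = attribution.flatten()
    n_mask = max(1, int(k * flat.size))
    top = np.argsort(flat)[-n_mask:]      # indices of most important pixels
    perturbed = image.flatten().copy()
    perturbed[top] = 0.0                  # simple zero-baseline masking
    return base - model(perturbed.reshape(image.shape))

# Toy usage: a linear "model" whose exact per-pixel contribution is w * x,
# so using that product as the attribution should maximize the drop.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
model = lambda x: float((w * x).sum())
image = rng.normal(size=(8, 8))
print(faithfulness_drop(model, image, attribution=w * image))
```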
Advancing Model Pruning via Bi-level Optimization
The deployment constraints in practical applications necessitate the pruning
of large-scale deep learning models, i.e., promoting their weight sparsity. As
illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the
potential of improving their generalization ability. At the core of LTH,
iterative magnitude pruning (IMP) is the predominant method for successfully
finding 'winning tickets'. Yet, the computational cost of IMP grows
prohibitively as the targeted pruning ratio increases. To reduce the
computation overhead, various efficient 'one-shot' pruning methods have been
developed, but these schemes are usually unable to find winning tickets as good
as IMP. This raises the question: how can we close the gap between pruning
accuracy and pruning efficiency? To tackle it, we pursue an algorithmic
advancement of model pruning. Specifically, we formulate the pruning problem
from a fresh viewpoint: bi-level optimization (BLO). We show that the
BLO interpretation provides a technically grounded optimization base for an
efficient implementation of the pruning-retraining learning paradigm used in
IMP. We also show that the proposed bi-level optimization-oriented pruning
method (termed BiP) corresponds to a special class of BLO problems with a
bi-linear problem structure. By leveraging this bi-linearity, we theoretically
show that BiP can be solved as easily as first-order optimization, thereby
inheriting its computational efficiency. Through extensive experiments on both
structured and
unstructured pruning with 5 model architectures and 4 data sets, we demonstrate
that BiP can find better winning tickets than IMP in most cases, and is
computationally as efficient as the one-shot pruning schemes, achieving a
2-7x speedup over IMP at the same level of model accuracy and sparsity.
Comment: Thirty-sixth Conference on Neural Information Processing Systems
(NeurIPS 2022)
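As an illustration of the pruning-retraining paradigm cast as bi-level
optimization, here is a toy sketch: the lower level takes gradient steps on
the masked model's weights, while the upper level periodically re-selects the
mask. A hard top-k mask update stands in for BiP's actual relaxed mask
optimization, and all names and hyperparameters below are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X @ (rng.normal(size=50) * (rng.random(50) < 0.2))  # sparse ground truth

w = rng.normal(size=50) * 0.1    # lower-level variable: model weights
m = np.ones(50)                  # upper-level variable: pruning mask
sparsity, lr = 0.8, 0.01

for step in range(500):
    # Lower level: gradient step on the masked (pruned) model's loss.
    grad = X.T @ (X @ (m * w) - y) / len(X)
    w -= lr * m * grad
    # Upper level: periodically re-select the mask from weight magnitudes
    # (a hard top-k projection in place of BiP's relaxed mask update).
    if step % 50 == 0:
        k = int((1 - sparsity) * w.size)
        m = np.zeros(50)
        m[np.argsort(np.abs(w))[-k:]] = 1.0

print("active weights:", int(m.sum()),
      "loss:", float(np.mean((X @ (m * w) - y) ** 2)))
```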
Identifying Student Profiles Within Online Judge Systems Using Explainable Artificial Intelligence
Online Judge (OJ) systems are typically used in programming-related courses
because they yield fast and objective assessments of the code developed by the
students. Such an evaluation generally provides a single decision based on a
rubric, most commonly whether the submission successfully accomplishes the
assignment. Nevertheless, since in an educational context such information may
be deemed insufficient, it would be beneficial for both the student and the
instructor to receive additional feedback about the overall development of the
task. This work aims to tackle this limitation by further exploiting the
information gathered by the OJ and automatically inferring feedback for both
the student and the instructor. More precisely, we consider the use of
learning-based schemes, particularly Multi-Instance Learning and classical
Machine Learning formulations, to model student behaviour. In addition,
Explainable Artificial Intelligence is used to provide human-understandable
feedback. The proposal has been evaluated on a case study comprising 2,500
submissions from roughly 90 students in a programming-related course of a
Computer Science degree. The results obtained validate the proposal: the model
is capable of significantly predicting the user outcome (either passing or
failing the assignment) solely based on the behavioural pattern inferred from
the submissions provided to the OJ. Moreover, the proposal is able to identify
prone-to-fail student groups and profiles as well as other relevant
information, which eventually serves as feedback to both the student and the
instructor.
This work has been partially funded by the "Programa Redes-I3CE de
investigación en docencia universitaria del Instituto de Ciencias de la
Educación (REDES-I3CE-2020-5069)" of the University of Alicante. The third
author is supported by grant APOSTD/2020/256 from the "Programa I+D+I de la
Generalitat Valenciana".
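As a rough illustration of the modelling pipeline described above, the sketch
below trains a classifier on hypothetical per-student behavioural features and
uses permutation importance as a simple, model-agnostic explanation. The
feature names and synthetic labels are invented for the example and are not
the paper's actual data or XAI method.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical per-student behavioural features aggregated from OJ logs:
# [number of submissions, mean gap between submissions (minutes),
#  fraction of compile errors, fraction of wrong-answer verdicts].
rng = np.random.default_rng(0)
X = rng.random((90, 4))
y = (X[:, 2] + X[:, 3] > 0.9).astype(int)   # synthetic pass/fail label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: a model-agnostic explanation of which behavioural
# signals drive the predicted outcome.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["n_subs", "gap_min", "compile_err", "wrong_ans"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```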
Semantic Segmentation Enhanced Transformer Model for Human Attention Prediction
Saliency Prediction aims to predict the attention distribution of human eyes
given an RGB image. Most of the recent state-of-the-art methods are based on
deep image feature representations from traditional CNNs. However, traditional
convolutions cannot capture the global features of an image well due to their
small kernel sizes. Moreover, high-level factors that closely correlate with
human visual perception, e.g., objects, color, and light, are not considered.
Motivated by these observations, we propose a Transformer-based method with
semantic segmentation as an auxiliary learning objective. The Transformer can
capture more global cues of the image. In addition, simultaneously learning
object segmentation simulates human visual perception, which we verify through
an investigation of human gaze control in cognitive science. We
build an extra decoder for the subtask and the multiple tasks share the same
Transformer encoder, forcing it to learn from multiple feature spaces. We find
in practice that simply adding the subtask can confuse the learning of the
main task, so we propose a Multi-task Attention Module to handle the feature
interaction between the multiple learning targets. Our method achieves
competitive performance compared to other state-of-the-art methods.
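The shared-encoder, two-decoder arrangement can be sketched as follows. This
toy version uses a small convolutional encoder as a stand-in for the paper's
Transformer encoder and a plain weighted-sum loss rather than the proposed
Multi-task Attention Module; all module names and loss weights are
assumptions.
```python
import torch
import torch.nn as nn

class MultiTaskSaliency(nn.Module):
    """Shared encoder with two heads: saliency map + segmentation mask."""
    def __init__(self, channels=32, n_classes=21):
        super().__init__()
        self.encoder = nn.Sequential(   # stand-in for the Transformer encoder
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.saliency_head = nn.Conv2d(channels, 1, 1)
        self.segment_head = nn.Conv2d(channels, n_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)         # both tasks share these features
        return self.saliency_head(feats), self.segment_head(feats)

model = MultiTaskSaliency()
x = torch.randn(2, 3, 64, 64)
sal, seg = model(x)
# Joint loss: main saliency objective plus a weighted auxiliary
# segmentation term, so the encoder learns from both feature spaces.
sal_target = torch.rand(2, 1, 64, 64)
seg_target = torch.randint(0, 21, (2, 64, 64))
loss = nn.functional.mse_loss(sal, sal_target) \
     + 0.5 * nn.functional.cross_entropy(seg, seg_target)
loss.backward()
print(float(loss))
```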
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning, and architectures found
by NAS have already outpaced the best human-designed ones on many tasks. In
the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
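For readers new to the topic, the NAS loop itself reduces to: sample an
architecture from a search space, estimate its quality, and keep the best. A
toy random-search sketch is shown below; the search space and the stand-in
evaluation function are invented for illustration and correspond to no
specific method from the survey.
```python
import random

# A toy search space: depth, width, and activation choice.
SPACE = {"depth": [2, 4, 6], "width": [32, 64, 128], "act": ["relu", "gelu"]}

def sample():
    """Draw one architecture uniformly from the search space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def evaluate(arch):
    # Stand-in for training + validation: favours deeper/wider nets,
    # with a small penalty for compute cost.
    score = arch["depth"] * 0.1 + arch["width"] / 256
    return score - 0.001 * arch["depth"] * arch["width"]

random.seed(0)
best = max((sample() for _ in range(20)), key=evaluate)
print("best architecture:", best)
```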
Pretrained Embeddings for E-commerce Machine Learning: When it Fails and Why?
The use of pretrained embeddings has become widespread in modern e-commerce
machine learning (ML) systems. In practice, however, we have encountered
several key issues when using pretrained embeddings in a real-world production
system, many of which cannot be fully explained by current knowledge.
Unfortunately, we find that there is a lack of thorough understanding of how
pretrained embeddings work, especially of their intrinsic properties and
interactions with downstream tasks. Consequently, it becomes challenging to
make interactive and scalable decisions regarding the use of pretrained
embeddings in practice.
Our investigation leads to two significant discoveries about using pretrained
embeddings in e-commerce applications. Firstly, we find that the design of the
pretraining and downstream models, particularly how they encode and decode
information via embedding vectors, can have a profound impact. Secondly, we
establish a principled perspective on pretrained embeddings through the lens
of kernel analysis, which can be used to evaluate their predictability in an
interactive and scalable way. These findings help to address the practical
challenges we faced and offer valuable guidance for the successful adoption of
pretrained embeddings in real-world production. Our conclusions are backed by
solid theoretical reasoning, benchmark experiments, and online testing.
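A minimal sketch of the kernel-analysis viewpoint: the Gram matrix induced by
the embeddings determines which downstream functions are easy to fit, so a
cross-validated kernel-ridge R^2 can serve as a cheap predictability score.
The embeddings and target below are synthetic, and the RBF kernel choice is
an assumption rather than the paper's exact construction.
```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Hypothetical pretrained item embeddings and a downstream target
# (e.g., a click-through propensity); both are synthetic here.
rng = np.random.default_rng(0)
E = rng.normal(size=(300, 16))                 # pretrained embeddings
y = np.tanh(E[:, 0] + 0.5 * E[:, 1]) + 0.1 * rng.normal(size=300)

# Kernel view: the RBF Gram matrix over embeddings determines what
# functions of the embedding space are easy to fit; cross-validated R^2
# serves as the predictability score.
score = cross_val_score(KernelRidge(kernel="rbf", alpha=1.0), E, y, cv=5)
print(f"predictability (CV R^2): {score.mean():.3f}")
```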
Information-Theoretic GAN Compression with Variational Energy-based Model
We propose an information-theoretic knowledge distillation approach for the
compression of generative adversarial networks, which aims to maximize the
mutual information between teacher and student networks via a variational
optimization based on an energy-based model. Because the direct computation of
the mutual information in continuous domains is intractable, our approach
instead optimizes the student network by maximizing a variational lower bound
of the mutual information. To achieve a tight lower bound, we introduce an
energy-based model relying on a deep neural network to represent a flexible
variational distribution that handles high-dimensional images and effectively
considers spatial dependencies between pixels. Since the proposed method is
a generic optimization algorithm, it can be conveniently incorporated into
arbitrary generative adversarial networks and even dense prediction networks,
e.g., image enhancement models. We demonstrate that the proposed algorithm
consistently achieves outstanding performance in compressing generative
adversarial networks when combined with several existing models.
Comment: Accepted at NeurIPS 2022
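The underlying variational bound is I(t; s) >= H(t) + E[log q(t|s)], so
maximizing the log-likelihood of teacher features t under q(.|s) tightens the
bound. The sketch below uses a simple Gaussian q as a stand-in for the paper's
energy-based model; all dimensions and module names are assumptions.
```python
import torch
import torch.nn as nn

class GaussianQ(nn.Module):
    """Gaussian variational distribution q(t|s). The paper uses a more
    flexible energy-based model; this is only a simple stand-in."""
    def __init__(self, dim=64):
        super().__init__()
        self.mean = nn.Linear(dim, dim)
        self.log_var = nn.Parameter(torch.zeros(dim))

    def log_prob(self, t, s):
        # Diagonal-Gaussian log-density (constant term omitted).
        var = self.log_var.exp()
        return (-0.5 * ((t - self.mean(s)) ** 2 / var + self.log_var)).sum(-1)

q = GaussianQ()
t = torch.randn(8, 64)            # teacher features
s = torch.randn(8, 64)            # student features
# Maximizing E[log q(t|s)] over q and the student tightens the MI bound.
loss = -q.log_prob(t, s).mean()
loss.backward()
print(float(loss))
```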
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective
ChatGPT is a chatbot service recently released by OpenAI that has been
receiving increasing attention over the past few months. While various aspects
of ChatGPT have been evaluated, its robustness, i.e., its performance on
unexpected inputs, remains unclear to the public. Robustness is of particular
concern in responsible AI, especially for safety-critical applications. In
this
paper, we conduct a thorough evaluation of the robustness of ChatGPT from the
adversarial and out-of-distribution (OOD) perspective. To do so, we employ the
AdvGLUE and ANLI benchmarks to assess adversarial robustness and the Flipkart
review and DDXPlus medical diagnosis datasets for OOD evaluation. We select
several popular foundation models as baselines. The results show that ChatGPT
has consistent advantages on most adversarial and OOD classification and
translation tasks. However, its absolute performance is far from perfect,
which suggests that adversarial and OOD robustness remains a significant
challenge for foundation models. Moreover, ChatGPT shows astounding
performance in
understanding dialogue-related texts and we find that it tends to provide
informal suggestions for medical tasks instead of definitive answers. Finally,
we present in-depth discussions of possible research directions.
Comment: Technical report; code is at:
https://github.com/microsoft/robustlear
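The evaluation protocol boils down to comparing a model's accuracy on clean
inputs against perturbed (adversarial or OOD) variants of the same inputs. A
minimal sketch follows, where `query_model` is a hypothetical stub standing in
for the actual chat-service API, and the two text pairs are invented examples,
not items from AdvGLUE or ANLI.
```python
# Minimal robustness-evaluation sketch: feed both clean and perturbed
# versions of each input to the model and compare accuracy.
def query_model(prompt: str) -> str:
    # Hypothetical stub standing in for an API call to a chat model.
    return "positive" if "good" in prompt else "negative"

pairs = [  # (clean text, adversarially perturbed text, gold label)
    ("the movie was good", "the movie was g00d", "positive"),
    ("service was awful", "service was awfu1", "negative"),
]

def accuracy(samples):
    return sum(query_model(t) == gold for t, gold in samples) / len(samples)

clean = [(c, g) for c, _, g in pairs]
adv = [(a, g) for _, a, g in pairs]
print(f"clean acc: {accuracy(clean):.2f}, "
      f"adversarial acc: {accuracy(adv):.2f}")
```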
Generalized Relation Modeling for Transformer Tracking
Compared with previous two-stream trackers, the recent one-stream tracking
pipeline, which allows earlier interaction between the template and search
region, has achieved a remarkable performance gain. However, existing
one-stream trackers always let the template interact with all parts inside the
search region throughout all the encoder layers. This could potentially lead to
target-background confusion when the extracted feature representations are not
sufficiently discriminative. To alleviate this issue, we propose a generalized
relation modeling method based on adaptive token division. The proposed method
is a generalized formulation of attention-based relation modeling for
Transformer tracking, which inherits the merits of both previous two-stream and
one-stream pipelines whilst enabling more flexible relation modeling by
selecting appropriate search tokens to interact with template tokens. An
attention masking strategy and the Gumbel-Softmax technique are introduced to
facilitate the parallel computation and end-to-end learning of the token
division module. Extensive experiments show that our method is superior to the
two-stream and one-stream pipelines and achieves state-of-the-art performance
on six challenging benchmarks with a real-time running speed.
Comment: Accepted by CVPR 2023. Code and models are publicly available at
https://github.com/Little-Podi/GR
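Token division via Gumbel-Softmax can be sketched as follows: each search
token makes a two-way (interact / do not interact) relaxed categorical choice,
and the resulting hard selection masks the template-to-search attention
scores. The gating layer and dimensions are assumptions, not the authors'
exact module.
```python
import torch
import torch.nn.functional as F

# Differentiable token division: each search-region token makes a two-way
# choice (interact with the template or not), relaxed via Gumbel-Softmax so
# the selection stays end-to-end trainable.
torch.manual_seed(0)
n_tokens, dim = 16, 32
tokens = torch.randn(n_tokens, dim)
gate = torch.nn.Linear(dim, 2)                  # per-token division logits

logits = gate(tokens)                           # (n_tokens, 2)
select = F.gumbel_softmax(logits, tau=1.0, hard=True)[:, 1]  # hard 0/1 picks
if select.sum() == 0:                           # degenerate case: keep all
    select = torch.ones(n_tokens)

# Attention masking: template tokens attend only to selected search tokens.
attn_scores = torch.randn(4, n_tokens)          # template -> search scores
masked = attn_scores.masked_fill(select == 0, float("-inf"))
attn = masked.softmax(dim=-1)
print("selected search tokens:", int(select.sum()))
```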
Patching Weak Convolutional Neural Network Models through Modularization and Composition
Despite great success in many applications, deep neural networks are not
always robust in practice. For instance, a convolutional neural network (CNN)
model for classification tasks often performs unsatisfactorily in classifying
some particular classes of objects. In this work, we are concerned with
patching the weak part of a CNN model instead of improving it through the
costly retraining of the entire model. Inspired by the fundamental concepts of
modularization and composition in software engineering, we propose a
compressed modularization approach, CNNSplitter, which decomposes a strong CNN
model for N-class classification into smaller CNN modules. Each module is a
sub-model containing a part of the convolution kernels of the strong model. To
patch a weak CNN model that performs unsatisfactorily on a target class (TC),
we compose the weak CNN model with the corresponding module obtained from a
strong CNN model. The ability of the weak CNN model to recognize the TC can
thus be improved through patching. Moreover, the ability to recognize non-TCs
is also improved, as samples misclassified as the TC can now be correctly
classified as non-TCs. Experimental results with two representative CNNs on
three widely-used datasets show that the average improvements on the TC in
terms of precision and recall are 12.54% and 2.14%, respectively. Moreover,
patching improves the accuracy on non-TCs by 1.18%. The results demonstrate
that CNNSplitter can patch a weak CNN model through modularization and
composition, thus providing a new solution for developing robust CNN models.
Comment: Accepted at ASE'22
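A minimal sketch of the composition step: the module's confidence for the
target class (TC) adjusts the weak model's TC logit before the final decision.
The mixing rule and the `alpha` weight are invented for illustration and
simplify CNNSplitter's actual composition.
```python
import numpy as np

def patch_prediction(weak_logits, module_score, tc, alpha=0.5):
    """Compose a weak model with a target-class (TC) module: the module's
    confidence that the input belongs to the TC adjusts the weak model's
    TC logit before the final decision (a simplified composition rule)."""
    patched = weak_logits.copy()
    patched[tc] = (1 - alpha) * patched[tc] + alpha * module_score
    return int(np.argmax(patched))

# Toy usage: the weak model under-scores class 2; the module is confident.
weak_logits = np.array([1.2, 0.8, 0.9, 0.3])
print("before:", int(np.argmax(weak_logits)))                             # 0
print("after: ", patch_prediction(weak_logits, module_score=3.0, tc=2))   # 2
```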