Discovering Organizational Correlations from Twitter
Organizational relationships are usually complex in real life, and it is often
difficult or impossible to measure correlations among organizations directly,
because the relevant information is rarely public (e.g., ties among terrorist
organizations). Nowadays, an increasing
amount of organizational information can be posted online by individuals and
spread instantly through Twitter. Such information can be crucial for detecting
organizational correlations. In this paper, we study the problem of discovering
correlations among organizations from Twitter. Mining organizational
correlations is challenging for two reasons: a) Twitter data arrives in large
volumes of mixed information in which the most relevant facts about
organizations are often buried, so organizational correlations can be scattered
across multiple places and represented in different forms; b) using the
extracted information collectively and judiciously is difficult because
organizational correlations appear in multiple representations. To address
these issues, we propose
multi-CG (multiple Correlation Graphs based model), an unsupervised framework
that can learn a consensus of correlations among organizations based on
multiple representations extracted from Twitter, which is more accurate and
robust than correlations based on a single representation. Empirical study
shows that the consensus graph extracted from Twitter can capture the
organizational correlations effectively.
Comment: 11 pages, 4 figures
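The abstract does not specify how multi-CG fuses the correlation graphs into a consensus; as an illustration only, assuming each representation yields a real-valued correlation matrix over the same set of organizations, the simplest consensus is a normalized weighted average (the function name and signature below are hypothetical):

```python
import numpy as np

def consensus_graph(graphs, weights=None):
    """Fuse several organization-correlation matrices into one consensus graph.

    graphs:  list of (N, N) arrays, one correlation graph per representation
    weights: optional per-representation weights (defaults to uniform)
    """
    stacked = np.stack(graphs)                      # (R, N, N)
    if weights is None:
        weights = np.full(len(graphs), 1.0 / len(graphs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # normalize to sum to 1
    return np.tensordot(weights, stacked, axes=1)   # (N, N) weighted average
```

The actual model presumably learns the weights (or a richer consensus objective) rather than fixing them; this sketch only shows the shape of the fusion step.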
Finding the direction of disturbance propagation in a chemical process using transfer entropy
Published version
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Over the last decade, Convolutional Neural Network (CNN) models have been
highly successful in solving complex vision problems. However, these deep
models are perceived as "black box" methods considering the lack of
understanding of their internal functioning. There has been a significant
recent interest in developing explainable deep learning models, and this paper
is an effort in this direction. Building on a recently proposed method called
Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide
better visual explanations of CNN model predictions, in terms of better object
localization as well as explaining occurrences of multiple object instances in
a single image, compared to the state of the art. We provide a mathematical
derivation for the proposed method, which takes a weighted combination of the
positive partial derivatives of the last convolutional layer's feature maps
with respect to a specific class score as the weights for generating a visual
explanation for the corresponding class label. Our extensive experiments and evaluations,
both subjective and objective, on standard datasets showed that Grad-CAM++
provides promising human-interpretable visual explanations for a given CNN
architecture across multiple tasks including classification, image caption
generation and 3D action recognition; as well as in new settings such as
knowledge distillation.
Comment: 17 pages, 15 figures, 11 tables. Accepted in the proceedings of the
IEEE Winter Conf. on Applications of Computer Vision (WACV 2018). An extended
version is under review at IEEE Transactions on Pattern Analysis and Machine
Intelligence.
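A minimal NumPy sketch of the weighting the abstract describes, assuming the per-channel feature maps and the gradients of the class score with respect to them have already been computed (in practice via a deep-learning framework's autograd); the closed-form for the pixel-wise coefficients follows the Grad-CAM++ derivation, and the small epsilon guard is an implementation detail added here:

```python
import numpy as np

def grad_cam_pp_map(feature_maps, grads):
    """Grad-CAM++ saliency map from last-conv activations and gradients.

    feature_maps: (K, H, W) activations of the last convolutional layer
    grads:        (K, H, W) gradients of a class score w.r.t. feature_maps
    """
    grads_sq = grads ** 2
    grads_cu = grads ** 3
    sum_acts = feature_maps.sum(axis=(1, 2), keepdims=True)   # (K, 1, 1)
    denom = 2.0 * grads_sq + sum_acts * grads_cu
    # pixel-wise weighting coefficients (guard against zero denominators)
    alpha = grads_sq / np.where(denom != 0.0, denom, 1e-8)
    # channel weights: alpha-weighted sum of the positive partial derivatives
    weights = (alpha * np.maximum(grads, 0.0)).sum(axis=(1, 2))  # (K,)
    cam = (weights[:, None, None] * feature_maps).sum(axis=0)
    return np.maximum(cam, 0.0)  # ReLU keeps positively contributing regions
```

The resulting (H, W) map is then typically upsampled to the input resolution and overlaid on the image.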
Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment
Recent advances have indicated the strengths of self-supervised pre-training
for improving representation learning on downstream tasks. Existing works often
utilize self-supervised pre-trained models by fine-tuning on downstream tasks.
However, fine-tuning does not generalize to cases in which one needs to build
a customized model architecture different from that of the self-supervised
model. In this
work, we formulate a new knowledge distillation framework to transfer the
knowledge from self-supervised pre-trained models to any other student network
by a novel approach named Embedding Graph Alignment. Specifically, inspired by
the spirit of instance discrimination in self-supervised learning, we model the
instance-instance relations by a graph formulation in the feature embedding
space and distill the self-supervised teacher knowledge to a student network by
aligning the teacher graph and the student graph. Our distillation scheme can
be flexibly applied to transfer the self-supervised knowledge to enhance
representation learning on various student networks. We demonstrate that our
model outperforms multiple representative knowledge distillation methods on
three benchmark datasets, including CIFAR100, STL10, and TinyImageNet. Code is
here: https://github.com/yccm/EGA.
Comment: British Machine Vision Conference (BMVC 2022).
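The graph-alignment idea in the abstract can be sketched concretely: build an instance-instance similarity graph over a batch in each model's embedding space and penalize the discrepancy between the two graphs. This is an illustrative simplification (cosine-similarity graphs, mean-squared alignment loss); the paper's exact graph construction and loss may differ:

```python
import numpy as np

def embedding_graph(emb):
    """Cosine-similarity graph over a batch of embeddings: (B, D) -> (B, B)."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize rows
    return emb @ emb.T

def graph_alignment_loss(teacher_emb, student_emb):
    """Distillation loss aligning teacher and student instance graphs."""
    g_teacher = embedding_graph(teacher_emb)
    g_student = embedding_graph(student_emb)
    return np.mean((g_teacher - g_student) ** 2)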