Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks
Deep clustering has recently attracted significant attention. Despite the
remarkable progress, most of the previous deep clustering works still suffer
from two limitations. First, many of them rely on a distribution-based
clustering loss, lacking the ability to exploit sample-wise (or
augmentation-wise) relationships via contrastive learning. Second, they often
neglect indirect sample-wise structural information, overlooking the rich
possibilities of multi-scale neighborhood structure learning. In view of this,
this paper presents a new deep clustering approach termed Image clustering with
contrastive learning and multi-scale Graph Convolutional Networks (IcicleGCN),
which bridges the gap between convolutional neural network (CNN) and graph
convolutional network (GCN) as well as the gap between contrastive learning and
multi-scale neighborhood structure learning for the image clustering task. The
proposed IcicleGCN framework consists of four main modules, namely, the
CNN-based backbone, the Instance Similarity Module (ISM), the Joint Cluster
Structure Learning and Instance Reconstruction Module (JC-SLIM), and the
Multi-scale GCN module (M-GCN). Specifically, with two random augmentations
performed on each image, the backbone network with two weight-sharing views is
utilized to learn the representations for the augmented samples, which are then
fed to ISM and JC-SLIM for instance-level and cluster-level contrastive
learning, respectively. Further, to enforce multi-scale neighborhood structure
learning, two streams of GCNs and an auto-encoder are simultaneously trained
via (i) the layer-wise interaction with representation fusion and (ii) the
joint self-adaptive learning that keeps their last-layer output distributions
consistent. Experiments on multiple image datasets demonstrate the superior
clustering performance of IcicleGCN over the state-of-the-art.
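As a concrete illustration of the instance-level contrastive learning performed in modules like ISM, below is a minimal PyTorch sketch of an NT-Xent-style loss over two augmented views; it is an illustrative stand-in under assumed names and hyperparameters, not the authors' implementation.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Instance-level contrastive loss over two views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    # Row i's positive is the other augmentation of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Representations from the two weight-sharing views of the backbone.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())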
Learning Transferable Adversarial Robust Representations via Multi-view Consistency
Despite their success on few-shot learning problems, most meta-learned models
focus only on achieving good performance on clean examples and thus easily
break down when given adversarially perturbed samples. While some recent works
have shown that a combination of adversarial learning and meta-learning could
enhance the robustness of a meta-learner against adversarial attacks, they fail
to achieve generalizable adversarial robustness to unseen domains and tasks,
which is the ultimate goal of meta-learning. To address this challenge, we
propose a novel meta-adversarial multi-view representation learning framework
with dual encoders. Specifically, we introduce a discrepancy between the two
differently augmented samples of the same data instance by first updating the
encoder parameters with them and then imposing a novel label-free adversarial
attack that maximizes their discrepancy. Then, we maximize the
consistency across the views to learn transferable robust representations
across domains and tasks. Through experimental validation on multiple
benchmarks, we demonstrate the effectiveness of our framework on few-shot
learning tasks from unseen domains, achieving over 10\% robust accuracy
improvements against previous adversarial meta-learning baselines.
Comment: *Equal contribution (Author ordering determined by coin flip).
NeurIPS SafetyML workshop 2022, Under review.
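To make the idea concrete, here is a minimal PyTorch sketch of a label-free, PGD-style attack that maximizes the discrepancy between two augmented views, followed by a consistency term to minimize; the toy encoder, the cosine-based discrepancy, and the step sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def view_discrepancy(f1, f2):
    # Cosine-distance discrepancy between the two view representations.
    return 1.0 - F.cosine_similarity(f1, f2, dim=1).mean()

def label_free_attack(encoder, x1, x2, eps=8/255, alpha=2/255, steps=5):
    """Perturb view x1 to maximize its discrepancy from view x2 (PGD-style)."""
    delta = torch.zeros_like(x1, requires_grad=True)
    for _ in range(steps):
        loss = view_discrepancy(encoder(x1 + delta), encoder(x2))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the discrepancy
            delta.clamp_(-eps, eps)              # stay in the L-inf ball
        delta.grad.zero_()
    return (x1 + delta).detach()

# Training step: attack one view, then pull the two views back together.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
x1, x2 = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
x1_adv = label_free_attack(encoder, x1, x2)
consistency = view_discrepancy(encoder(x1_adv), encoder(x2))  # minimize this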
Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes
The success of deep learning in computer vision is based on the availability of
large annotated datasets. To lower the need for hand labeled images, virtually
rendered 3D worlds have recently gained popularity. Creating realistic 3D
content is challenging on its own and requires significant human effort. In
this work, we propose an alternative paradigm which combines real and synthetic
data for learning semantic instance segmentation and object detection models.
Exploiting the fact that not all aspects of the scene are equally important for
this task, we propose to augment real-world imagery with virtual objects of the
target category. Capturing real-world images at large scale is easy and cheap,
and directly provides real background appearances without the need for creating
complex 3D models of the environment. We present an efficient procedure to
augment real images with virtual objects. This allows us to create realistic
composite images which exhibit both realistic background appearance and a large
number of complex object arrangements. In contrast to modeling complete 3D
environments, our augmentation approach requires only a few user interactions
in combination with 3D shapes of the target object. Through extensive
experimentation, we determine the right set of parameters to produce augmented
data which can maximally enhance the performance of instance segmentation
models. Further, we demonstrate the utility of our approach on training
standard deep models for semantic instance segmentation and object detection of
cars in outdoor driving scenes. We test the models trained on our augmented
data on the KITTI 2015 dataset, which we have annotated with pixel-accurate
ground truth, and on Cityscapes dataset. Our experiments demonstrate that
models trained on augmented imagery generalize better than those trained on
synthetic data or models trained on a limited amount of annotated real data.
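A minimal sketch of the core compositing step, pasting an alpha-masked rendering of the target object onto a real photograph with Pillow; the file names and placement are placeholders, and the careful selection of pose, position, and blending parameters that the paper studies is omitted.

from PIL import Image

# Real photo as background; the virtual object is a rendering with an alpha
# channel, so no 3D model of the environment is needed.
background = Image.open("street_scene.jpg").convert("RGBA")
car = Image.open("rendered_car.png").convert("RGBA")

composite = background.copy()
composite.paste(car, (420, 310), mask=car)   # alpha-aware compositing
composite.convert("RGB").save("augmented_scene.jpg")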
iPose: Instance-Aware 6D Pose Estimation of Partly Occluded Objects
We address the task of 6D pose estimation of known rigid objects from single
input images in scenarios where the objects are partly occluded. Recent
RGB-D-based methods are robust to moderate degrees of occlusion. For RGB
inputs, no previous method works well for partly occluded objects. Our main
contribution is to present the first deep learning-based system that estimates
accurate poses for partly occluded objects from RGB-D and RGB input. We achieve
this with a new instance-aware pipeline that decomposes 6D object pose
estimation into a sequence of simpler steps, where each step removes specific
aspects of the problem. The first step localizes all known objects in the image
using an instance segmentation network, and hence eliminates surrounding
clutter and occluders. The second step densely maps pixels to 3D object surface
positions, so-called object coordinates, using an encoder-decoder network, and
hence eliminates object appearance. The third, and final, step predicts the 6D
pose using geometric optimization. We demonstrate that we significantly
outperform the state-of-the-art for pose estimation of partly occluded objects
for both RGB and RGB-D inputs.
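The final geometric-optimization step can be illustrated with OpenCV's PnP+RANSAC solver, which recovers a 6D pose from pixel-to-object-coordinate correspondences; the correspondences and camera intrinsics below are synthetic placeholders rather than the paper's data.

import numpy as np
import cv2

# Dense predictions from the encoder-decoder: for each foreground pixel,
# a 3D point on the object surface (the "object coordinates").
object_points = np.random.rand(100, 3).astype(np.float32)       # placeholder
image_points = np.random.rand(100, 2).astype(np.float32) * 640  # placeholder
K = np.array([[572.4, 0.0, 325.3],
              [0.0, 573.6, 242.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)               # intrinsics

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, distCoeffs=None,
    reprojectionError=3.0, iterationsCount=100)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the recovered 6D pose
    print("t =", tvec.ravel())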
Teaching complex theoretical multi-step problems in ICT networking through 3D printing and augmented reality
This paper presents a pilot study rationale and research methodology using a mixed-media visualisation (3D printing and Augmented Reality simulation) learning intervention to help students in an ICT degree represent complex theoretical multi-step problems that lack a corresponding real-world physical analogue. This is important because such concepts are difficult to visualise without a corresponding mental model. The proposed intervention uses an augmented reality application built with free, commercially available tools, tested through an action research methodology, to evaluate the effectiveness of mixed-media visualisation techniques for teaching networking to ICT students. Specifically, 3D models of network equipment will be placed in a field, and the augmented reality app can then be used to observe packet traversal and routing between the different devices as data travels from source to destination. The expected outcome is an overall improvement in the final skill level of all students.
On the Importance of Visual Context for Data Augmentation in Scene Understanding
Performing data augmentation for learning deep neural networks is known to be
important for training visual recognition systems. By artificially increasing
the number of training examples, it helps reduce overfitting and improves
generalization. While simple image transformations can already improve
predictive performance in most vision tasks, larger gains can be obtained by
leveraging task-specific prior knowledge. In this work, we consider object
detection, semantic and instance segmentation, and augment the training images
by blending objects in existing scenes, using instance segmentation
annotations. We observe that randomly pasting objects on images hurts the
performance, unless the object is placed in the right context. To resolve this
issue, we propose an explicit context model by using a convolutional neural
network, which predicts whether an image region is suitable for placing a given
object or not. In our experiments, we show that our approach is able to improve
object detection, semantic and instance segmentation on the PASCAL VOC12 and
COCO datasets, with significant gains in a limited annotation scenario, i.e.
when only one category is annotated. We also show that the method is not
limited to datasets that come with expensive pixel-wise instance annotations
and can be used when only bounding boxes are available, by employing
weakly-supervised learning for instance mask approximation.
Comment: Updated the experimental section. arXiv admin note: substantial text
overlap with arXiv:1807.0742
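As a sketch of the context-model idea, the small CNN below scores whether a candidate scene region is a suitable place for a given object category; the architecture, patch size, and category count are illustrative assumptions rather than the paper's network.

import torch
import torch.nn as nn

class ContextScorer(nn.Module):
    def __init__(self, num_categories):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # One suitability logit per object category.
        self.head = nn.Linear(64, num_categories)

    def forward(self, context_patch):
        return self.head(self.features(context_patch))

# Score a 64x64 context patch (candidate region blanked out) for category 3.
scorer = ContextScorer(num_categories=20)
patch = torch.rand(1, 3, 64, 64)
suitability = torch.sigmoid(scorer(patch))[0, 3]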