Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images
Self-driving vehicles rely on urban street maps for autonomous navigation. In
this paper, we introduce Pix2Map, a method for inferring urban street map
topology directly from ego-view images, as needed to continually update and
expand existing maps. This is a challenging task, as we need to infer a complex
urban road topology directly from raw image data. The main insight of this
paper is that this problem can be posed as cross-modal retrieval by learning a
joint, cross-modal embedding space for images and existing maps, represented as
discrete graphs that encode the topological layout of the visual surroundings.
We conduct our experimental evaluation using the Argoverse dataset and show
that it is indeed possible to accurately retrieve street maps corresponding to
both seen and unseen roads solely from image data. Moreover, we show that our
retrieved maps can be used to update or expand existing maps and even show
proof-of-concept results for visual localization and image retrieval from
spatial graphs.
Comment: 12 pages, 8 figures
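As a rough illustration of the cross-modal retrieval formulation described above, the sketch below pairs image embeddings with graph embeddings through a symmetric contrastive loss and ranks stored maps by cosine similarity. The function names, loss choice, and hyperparameters are placeholder assumptions, not the authors' actual design.

```python
# A minimal sketch of joint image-graph contrastive retrieval, assuming
# precomputed embeddings from placeholder encoders; all names illustrative.
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb, graph_emb, temperature=0.07):
    """InfoNCE-style loss pulling each image toward its paired map graph."""
    img_emb = F.normalize(img_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = img_emb @ graph_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matching image-graph pairs sit on the diagonal; the rest are negatives.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def retrieve_map(img_emb, map_bank):
    """Return the index of the nearest stored graph embedding per image."""
    sims = F.normalize(img_emb, dim=-1) @ F.normalize(map_bank, dim=-1).t()
    return sims.argmax(dim=-1)
```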
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Multimodal contrastive learning aims to train a general-purpose feature
extractor, such as CLIP, on vast amounts of raw, unlabeled paired image-text
data. This can greatly benefit various complex downstream tasks, including
cross-modal image-text retrieval and image classification. Despite its
promising prospect, the security issues of cross-modal pre-trained encoders have
not been fully explored, especially when the pre-trained encoders are publicly
available for commercial use.
In this work, we propose AdvCLIP, the first attack framework for generating
downstream-agnostic adversarial examples based on cross-modal pre-trained
encoders. AdvCLIP aims to construct a universal adversarial patch for a set of
natural images that can fool all the downstream tasks inheriting the victim
cross-modal pre-trained encoder. To address the challenges of heterogeneity
between different modalities and unknown downstream tasks, we first build a
topological graph structure to capture the relevant positions between target
samples and their neighbors. Then, we design a topology-deviation based
generative adversarial network to generate a universal adversarial patch. By
adding the patch to images, we minimize the similarity between their embeddings
and those of the other modality and perturb the sample distribution in the
feature space, achieving universal non-targeted attacks. Our results demonstrate the excellent
attack performance of AdvCLIP on two types of downstream tasks across eight
datasets. We also tailor three popular defenses to mitigate AdvCLIP,
highlighting the need for new defense mechanisms to defend cross-modal
pre-trained encoders.
Comment: This paper has been accepted by the ACM International Conference on Multimedia (ACM MM '23, October 29-November 3, 2023, Ottawa, ON, Canada)
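The following is a deliberately simplified sketch of the universal-patch idea in this abstract: one patch is optimized by plain gradient descent to push patched image embeddings away from their paired text embeddings in a frozen CLIP-like model. It omits the topological graph and the GAN-based generator the paper actually proposes; the encoder interfaces, patch placement, and data format are assumptions.

```python
# Simplified, non-targeted universal patch optimization against a frozen
# CLIP-like encoder pair. Not the paper's topology-deviation GAN.
import torch
import torch.nn.functional as F

def apply_patch(images, patch, x=0, y=0):
    """Paste the patch at a fixed location on every image in the batch."""
    patched = images.clone()
    patched[:, :, y:y + patch.size(1), x:x + patch.size(2)] = patch
    return patched.clamp(0, 1)

def train_universal_patch(image_encoder, text_encoder, loader,
                          patch_size=32, steps=1000, lr=0.01, device="cpu"):
    patch = torch.rand(3, patch_size, patch_size, device=device,
                       requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    it = iter(loader)
    for _ in range(steps):
        try:
            images, texts = next(it)  # assumed (image batch, tokenized text)
        except StopIteration:
            it = iter(loader)
            images, texts = next(it)
        img_emb = F.normalize(
            image_encoder(apply_patch(images.to(device), patch)), dim=-1)
        txt_emb = F.normalize(text_encoder(texts), dim=-1)
        # Non-targeted objective: lower similarity to the paired text embedding.
        loss = (img_emb * txt_emb).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)  # keep the patch a valid image region
    return patch.detach()
```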
Person re-identification via efficient inference in fully connected CRF
In this paper, we address the person re-identification problem, i.e., retrieving
instances from a gallery that depict the same person as a given probe image.
This is very challenging because a person's
appearance usually undergoes significant variations due to changes in
illumination, camera angle and view, background clutter, and occlusion over the
camera network. In this paper, we assume that the matched gallery images should
not only be similar to the probe, but also be similar to each other, under a
suitable metric. We express this assumption with a fully connected CRF model in
which each node corresponds to a gallery image and every pair of nodes is
connected by an edge. A label variable is associated with each node to indicate
whether the corresponding image is from the target person. We define a unary potential for
each node using existing feature calculation and matching techniques, which
reflect the similarity between the probe and a gallery image, and define a
pairwise potential for each edge in terms of a weighted combination of Gaussian
kernels, which encode appearance similarity between pairs of gallery images.
The specific form of the pairwise potential allows us to exploit an efficient
inference algorithm to calculate the marginal distribution of each label
variable for this densely connected CRF. We show the superiority of our method
by applying it to public datasets and comparing with the state of the art.
Comment: 7 pages, 4 figures
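To make the inference step concrete, here is a minimal mean-field sketch for a fully connected binary CRF over gallery images with a single Gaussian appearance kernel. The features, kernel weights, and update details are placeholders; the paper's actual potentials and algorithm may differ.

```python
# Mean-field inference sketch for a fully connected binary CRF where nodes
# are gallery images; high-affinity pairs are encouraged to share labels.
import numpy as np

def mean_field_crf(unary, feats, weight=1.0, sigma=1.0, iters=10):
    """unary: (N, 2) negative log-probabilities per gallery image.
       feats: (N, D) appearance features used in the Gaussian kernel."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    kernel = weight * np.exp(-d2 / (2 * sigma ** 2))  # pairwise affinity
    np.fill_diagonal(kernel, 0.0)                     # no self-messages
    q = np.exp(-unary)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # Each node gathers label support from all others, weighted by affinity.
        msg = kernel @ q                              # (N, 2)
        logits = -unary + msg
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q[:, 1]  # marginal probability that each image matches the probe
```

Because the pairwise term is a Gaussian kernel over features, each mean-field update is a dense matrix-vector product, which is what keeps inference efficient even though the graph is fully connected.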
Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval
Sketch as an image search query is an ideal alternative to text in capturing
the fine-grained visual details. Prior successes on fine-grained sketch-based
image retrieval (FG-SBIR) have demonstrated the importance of tackling the
unique traits of sketches as opposed to photos, e.g., temporal vs. static,
strokes vs. pixels, and abstract vs. pixel-perfect. In this paper, we study a
further trait of sketches that has been overlooked to date: they are
hierarchical in terms of level of detail, as a person typically sketches an
object at varying extents of detail. This hierarchical structure
is often visually distinct. In this paper, we design a novel network that is
capable of cultivating sketch-specific hierarchies and exploiting them to match
sketch with photo at corresponding hierarchical levels. In particular, features
from a sketch and a photo are enriched using cross-modal co-attention, coupled
with hierarchical node fusion at every level to form a better embedding space
to conduct retrieval. Experiments on common benchmarks show our method to
outperform the state of the art by a significant margin.
Comment: Accepted for ORAL presentation in BMVC 2020
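A minimal sketch of the cross-modal co-attention idea named in the abstract, assuming token-level sketch and photo features: each stream attends to the other before pooling to retrieval embeddings. The hierarchical node fusion across levels is omitted, and the module shapes are placeholder assumptions.

```python
# Illustrative cross-modal co-attention block: sketch tokens attend to photo
# tokens and vice versa, then both streams are pooled for retrieval.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.sketch_to_photo = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.photo_to_sketch = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, sketch_tokens, photo_tokens):
        # Enrich each modality with context pooled from the other one.
        s_enriched, _ = self.sketch_to_photo(sketch_tokens, photo_tokens, photo_tokens)
        p_enriched, _ = self.photo_to_sketch(photo_tokens, sketch_tokens, sketch_tokens)
        # Mean-pool to single embeddings; cosine similarity then ranks photos.
        return s_enriched.mean(dim=1), p_enriched.mean(dim=1)
```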
VGSG: Vision-Guided Semantic-Group Network for Text-based Person Search
Text-based Person Search (TBPS) aims to retrieve images of a target pedestrian
indicated by a textual description. It is essential for TBPS to extract
fine-grained local features and align them across modalities. Existing methods
utilize external tools or heavy cross-modal interaction to achieve explicit
alignment of cross-modal fine-grained features, which is inefficient and
time-consuming. In this work, we propose a Vision-Guided Semantic-Group Network
(VGSG) for text-based person search to extract well-aligned fine-grained visual
and textual features. In the proposed VGSG, we develop a Semantic-Group Textual
Learning (SGTL) module and a Vision-guided Knowledge Transfer (VGKT) module to
extract textual local features under the guidance of visual local clues. In
SGTL, in order to obtain local textual representations, we group textual
features along the channel dimension based on the semantic cues of language
expression, which encourages similar semantic patterns to be grouped implicitly
without external tools. In VGKT, vision-guided attention is employed to
extract visually related textual features, which are inherently aligned with
visual cues and termed vision-guided textual features. Furthermore, we design a
relational knowledge transfer, including a vision-language similarity transfer
and a class probability transfer, to adaptively propagate information of the
vision-guided textual features to semantic-group textual features. With the
help of relational knowledge transfer, VGKT is capable of aligning
semantic-group textual features with corresponding visual features without
external tools and complex pairwise interaction. Experimental results on two
challenging benchmarks demonstrate its superiority over state-of-the-art
methods.
Comment: Accepted to IEEE TIP
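As a loose sketch of the two components named above, the code below groups textual channels into K chunks (standing in for SGTL) and uses visual local features as queries over text tokens (standing in for VGKT). All dimensions and layer choices are assumptions rather than the paper's design.

```python
# Rough sketch: channel-wise semantic grouping of text features, plus
# vision-guided attention that pools text tokens with visual queries.
import torch
import torch.nn as nn

class SemanticGroup(nn.Module):
    """Split channels into K groups so related semantic patterns cluster."""
    def __init__(self, dim=512, groups=4):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.proj = nn.Linear(dim // groups, dim // groups)

    def forward(self, text_feat):                 # (B, dim)
        parts = text_feat.chunk(self.groups, dim=-1)
        return torch.stack([self.proj(p) for p in parts], dim=1)  # (B, K, dim/K)

def vision_guided_attention(visual_locals, text_tokens):
    """Use visual local features (B, Nv, d) as queries over text tokens
    (B, Nt, d), returning text features aligned to each visual region."""
    attn = torch.softmax(visual_locals @ text_tokens.transpose(1, 2), dim=-1)
    return attn @ text_tokens                     # (B, Nv, d)
```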