Discrete Multi-modal Hashing with Canonical Views for Robust Mobile Landmark Search
Mobile landmark search (MLS) has recently received increasing attention for its
great practical value. However, it remains unsolved due to two important
challenges: the high bandwidth consumption of query transmission, and the
huge visual variation of query images sent from mobile devices.
In this paper, we propose a novel hashing scheme, named canonical-view-based
discrete multi-modal hashing (CV-DMH), to handle these problems via a
three-stage learning procedure. First, a submodular function is designed to
measure the visual representativeness and redundancy of a view set. With it,
canonical views, which capture the key visual appearances of a landmark with
limited redundancy, are efficiently discovered with an iterative mining strategy.
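The abstract does not spell out the submodular objective, so the sketch below assumes a common facility-location-style form: coverage of the full view set is traded against a pairwise redundancy penalty and maximized greedily, which is one standard realization of the iterative mining step. The similarity matrix `sim`, the trade-off weight `lam`, and the function name are illustrative, not from the paper.

```python
import numpy as np

def select_canonical_views(sim, k, lam=0.5):
    """Greedy maximization of an illustrative submodular objective:
    facility-location coverage of all views by the selected set,
    minus a pairwise redundancy penalty among selected views.
    `sim` is an (n, n) pairwise visual-similarity matrix in [0, 1]."""
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)  # best similarity of each view to the selected set
    for _ in range(k):
        best_score, best_j = -np.inf, None
        for j in range(n):
            if j in selected:
                continue
            # marginal coverage gain of adding view j ...
            gain = np.maximum(covered, sim[:, j]).sum() - covered.sum()
            # ... discounted by redundancy with already-selected views
            redundancy = sum(sim[j, s] for s in selected)
            score = gain - lam * redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
        covered = np.maximum(covered, sim[:, best_j])
    return selected
```

Because the coverage term is submodular and the penalty is modular in each marginal, the greedy loop is the usual cheap heuristic for this family of objectives; it scans candidates once per selection, so the cost is O(k n^2) with this naive inner loop.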
Second, multi-modal sparse coding is applied to transform visual features from
multiple modalities into an intermediate representation, which robustly and
adaptively characterizes the visual content of varied landmark images in terms
of the canonical views.
Finally, compact binary codes are learned on this intermediate representation
within a tailored discrete binary embedding model that preserves the visual
relations of images measured with canonical views and removes the involved
noise. For this stage, we develop a new augmented Lagrangian multiplier (ALM)
based optimization method to solve for the discrete binary codes directly,
which not only deals with the discrete constraint explicitly but also enforces
the bit-uncorrelation and balance constraints jointly.
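The abstract names only the ALM solver and its constraints, so the following is a minimal sketch of one standard ALM pattern for this setting: binary codes B in {-1, +1}^(n x r) are kept near a real-valued target embedding V (assumed to come from the similarity-preserving objective), while balance (B^T 1 = 0) and bit-uncorrelation (B^T B = nI) are imposed through an auxiliary continuous variable. The exact updates and all names are illustrative, not CV-DMH's actual derivation.

```python
import numpy as np

def alm_binary_codes(V, rho=1.0, mu=1.1, iters=50):
    """ALM-style sketch: minimize ||B - V||^2 over B in {-1,+1}^(n x r),
    coupling B to a continuous Z that carries the balance and
    bit-uncorrelation constraints (Z^T 1 = 0, Z^T Z = n I)."""
    n, r = V.shape
    B = np.sign(V); B[B == 0] = 1
    Z = B.astype(float)
    Lam = np.zeros((n, r))               # Lagrangian multipliers
    for _ in range(iters):
        # B-step: closed-form sign update of the augmented Lagrangian,
        # handling the discrete constraint directly (no relaxation)
        B = np.sign(2 * V - Lam + rho * Z)
        B[B == 0] = 1
        # Z-step: map rho*B + Lam onto {Z : Z^T 1 = 0, Z^T Z = n I}
        # via the standard centering + SVD step
        M = rho * B + Lam
        M = M - M.mean(axis=0, keepdims=True)   # balance
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        Z = np.sqrt(n) * U @ Vt                 # bit-uncorrelation
        # multiplier and penalty updates
        Lam = Lam + rho * (B - Z)
        rho = min(rho * mu, 1e4)
    return B
```

Because the B-step is a sign operation and the Z-step is an r x r SVD, each iteration is cheap relative to relaxed-then-rounded alternatives, which is the usual argument for solving the discrete codes directly.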
Experiments on real-world landmark datasets demonstrate the superior
performance of CV-DMH over several state-of-the-art methods.
Predicting Visual Overlap of Images Through Interpretable Non-Metric Box Embeddings
To what extent are two images picturing the same 3D surfaces? Even when this
is a known scene, the answer typically requires an expensive search across
scale space, with matching and geometric verification of large sets of local
features. This expense is further multiplied when a query image is evaluated
against a gallery, e.g. in visual relocalization. While we don't obviate the
need for geometric verification, we propose an interpretable image embedding
that cuts the search in scale space to essentially a lookup.
Our approach measures the asymmetric relation between two images. The model
learns a scene-specific measure of similarity from training examples with
known 3D visible-surface overlaps. The result is that we can quickly identify,
for example, which test image is a close-up version of another, and by what
scale factor. Subsequently, local features need only be detected at that scale.
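The abstract does not give the embedding's exact form; below is a minimal sketch of the asymmetric box-overlap idea, assuming each image is embedded as an axis-aligned box with learned min/max corners, and that the non-metric relation is the fraction of one box covered by the other. The scale-factor remark in the final comment is a crude proxy, not the paper's stated estimator.

```python
import numpy as np

def box_overlap(box_a, box_b):
    """Asymmetric overlap between two axis-aligned boxes in R^d.

    Each box is (min_corner, max_corner), both shape (d,). Returns
    vol(A ∩ B) / vol(A): the fraction of image A's embedded box covered
    by image B's, which is order-dependent (non-metric) by design."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(box_a[1] - box_a[0])
    return inter / vol_a if vol_a > 0 else 0.0

# Asymmetry in action: B is a "close-up" whose box sits inside A's.
A = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))
B = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))
print(box_overlap(A, B))   # 0.0625: little of A is visible in B
print(box_overlap(B, A))   # 1.0:    all of B is visible in A
# The ratio of the two directions gives one crude proxy for relative scale.
```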
We validate our scene-specific model by showing how this embedding yields
competitive image-matching results while being simpler, faster, and also
interpretable by humans.

Comment: ECCV 2020