Large-Scale Product Retrieval with Weakly Supervised Representation Learning
Large-scale weakly supervised product retrieval is a practically useful yet
computationally challenging problem. This paper introduces a novel solution for
the eBay Visual Search Challenge (eProduct) held at the Ninth Workshop on
Fine-Grained Visual Categorization (FGVC9) at CVPR 2022. This
competition presents two challenges: (a) E-commerce is a drastically
fine-grained domain including many products with subtle visual differences; (b)
A lack of target instance-level labels for model training, with only coarse
category labels and product titles available. To overcome these obstacles, we
formulate a strong solution by a set of dedicated designs: (a) Instead of using
text training data directly, we mine thousands of pseudo-attributes from
product titles and use them as the ground truths for multi-label
classification. (b) We incorporate several strong backbones with advanced
training recipes for more discriminative representation learning. (c) We
further introduce a number of post-processing techniques including whitening,
re-ranking and model ensembling for retrieval enhancement. With 71.53%
MAR, our solution "Involution King" achieved second place on the
leaderboard.

Comment: FGVC9 CVPR 2022
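As a rough illustration of step (a), mining pseudo-attributes from product titles might look like the sketch below. The frequency-based token selection and multi-hot encoding here are assumptions for illustration; the abstract does not specify the exact mining procedure.

```python
from collections import Counter

def mine_pseudo_attributes(titles, top_k=1000, min_len=3):
    """Count tokens across product titles and keep the top_k most
    frequent ones as the pseudo-attribute vocabulary."""
    counts = Counter()
    for title in titles:
        counts.update(t for t in title.lower().split() if len(t) >= min_len)
    vocab = [tok for tok, _ in counts.most_common(top_k)]
    index = {tok: i for i, tok in enumerate(vocab)}
    return vocab, index

def to_multi_hot(title, index):
    """Turn a title into a multi-hot target vector over the vocabulary,
    usable as a ground truth for multi-label classification."""
    target = [0] * len(index)
    for tok in title.lower().split():
        if tok in index:
            target[index[tok]] = 1
    return target

titles = ["red leather wallet", "blue leather belt", "red cotton shirt"]
vocab, index = mine_pseudo_attributes(titles, top_k=5)
print(to_multi_hot("red leather bag", index))
```

A model would then be trained with a multi-label loss (e.g. binary cross-entropy) against these targets, so that title words act as weak supervision in place of the missing instance-level labels.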
Unsupervised Hashing via Similarity Distribution Calibration
Existing unsupervised hashing methods typically adopt a feature similarity
preservation paradigm. As a result, they overlook the intrinsic similarity
capacity discrepancy between the continuous feature and discrete hash code
spaces. Specifically, since the feature similarity distribution is
intrinsically biased (e.g., moderately positive similarity scores on negative
pairs), the hash code similarities of positive and negative pairs often become
inseparable (i.e., the similarity collapse problem). To solve this problem, in
this paper a novel Similarity Distribution Calibration (SDC) method is
introduced. Instead of matching individual pairwise similarity scores, SDC
aligns the hash code similarity distribution towards a calibration distribution
(e.g., beta distribution) with sufficient spread across the entire similarity
capacity/range, to alleviate the similarity collapse problem. Extensive
experiments show that our SDC outperforms the state-of-the-art alternatives on
both coarse category-level and instance-level image retrieval tasks, often by a
large margin. Code is available at https://github.com/kamwoh/sdc
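The calibration idea can be sketched with empirical quantile matching: sort the observed pairwise similarities of the (relaxed) hash codes and penalize their deviation from sorted samples of a beta calibration distribution rescaled to the full similarity range. This is a minimal numpy sketch of the concept only; the paper's actual SDC objective may differ (see the linked repository).

```python
import numpy as np

def sdc_loss(codes, alpha=2.0, beta=2.0, seed=0):
    """Align the distribution of pairwise cosine similarities with a
    Beta(alpha, beta) calibration distribution spread over [-1, 1],
    via sorted (quantile) matching."""
    codes = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    sims = codes @ codes.T
    iu = np.triu_indices(len(codes), k=1)      # unique pairs only
    observed = np.sort(sims[iu])
    rng = np.random.default_rng(seed)
    # Beta samples rescaled from [0, 1] to the full range [-1, 1].
    target = np.sort(rng.beta(alpha, beta, size=observed.size) * 2.0 - 1.0)
    return float(np.mean((observed - target) ** 2))

codes = np.random.default_rng(1).standard_normal((64, 32))
loss = sdc_loss(codes)
```

Because the target distribution spans the whole similarity range, minimizing this loss pushes positive- and negative-pair similarities apart, counteracting the similarity collapse described above.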
CyEDA: Cycle-Object Edge Consistency Domain Adaptation
Despite the advent of domain adaptation methods, most of them still struggle to preserve the instance-level details of images when performing global-level translation. While there are instance-level translation methods that can retain instance-level details well, most of them require either a pre-trained object detection/segmentation network or annotation labels. In this work, we propose a novel method, CyEDA, that performs global-level domain adaptation while taking care of image content, without integrating any pre-trained networks or annotation labels. Specifically, we introduce masking and a cycle-object edge consistency loss, which exploit the preservation of image objects. We show that our approach outperforms other SOTAs in terms of image quality and FID score on both the BDD100K and GTA datasets.
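The masking and cycle-object edge consistency loss could be sketched roughly as follows: compare edge maps of the source image and its cycle-reconstruction, restricted to an object mask, so that object boundaries survive translation. The finite-difference edge extractor and the L1 loss form here are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge map via finite differences, a simple
    stand-in for whatever edge extractor the method actually uses."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal gradients
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical gradients
    return np.sqrt(gx ** 2 + gy ** 2)

def cycle_edge_consistency_loss(src, cycled, mask):
    """Mean absolute difference between the edge maps of the source
    image and its cycle-reconstruction, inside the object mask only."""
    diff = np.abs(edge_map(src) - edge_map(cycled))
    return float((diff * mask).sum() / (mask.sum() + 1e-8))

rng = np.random.default_rng(0)
src = rng.random((8, 8))        # toy grayscale source image
mask = np.ones((8, 8))          # trivial mask covering the whole image
perfect = cycle_edge_consistency_loss(src, src, mask)
```

Because the loss only measures edge agreement inside the mask, the translator stays free to change global appearance (e.g. style, color) while object boundaries are penalized for drifting, which matches the abstract's goal of global-level translation that preserves image objects.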