Causally Regularized Learning with Agnostic Data Selection Bias
Most previous machine learning algorithms are built on the i.i.d.
hypothesis. However, this ideal assumption is often violated in real
applications, where selection bias may arise between the training and
testing processes. Moreover, in many scenarios the testing data are not even
available during training, which makes traditional methods such as transfer
learning infeasible, since they rely on prior knowledge of the test
distribution. Therefore, addressing agnostic selection bias for robust model
learning is of paramount importance for both academic research and real
applications. In this paper, under the assumption that causal relationships
among variables are robust across domains, we incorporate causal techniques
into predictive modeling and propose a novel Causally Regularized Logistic
Regression (CRLR) algorithm that jointly optimizes global confounder
balancing and weighted logistic regression. Global confounder balancing
helps to identify causal features, whose causal effects on the outcome are
stable across domains; performing logistic regression on those causal
features then yields a predictive model that is robust to the agnostic bias.
To validate the effectiveness of our CRLR algorithm, we conduct
comprehensive experiments on both synthetic and real-world datasets.
Experimental results clearly demonstrate that our CRLR algorithm outperforms
state-of-the-art methods, and the interpretability of our method is clearly
conveyed by feature visualization.
Comment: Oral paper at the 2018 ACM Multimedia Conference (MM'18)
AMatFormer: Efficient Feature Matching via Anchor Matching Transformer
Learning-based feature matching methods have been widely studied in recent
years. The core issue in learned feature matching is how to learn (1)
discriminative representations for feature points (or regions) within each
image and (2) consensus representations for feature points across images.
Recently, self- and cross-attention models have been exploited to address
this issue. However, in many scenes, features are large-scale, redundant,
and contaminated by outliers. Previous self-/cross-attention models
generally conduct message passing over all primal features, which leads to
redundant learning and high computational cost. To mitigate these
limitations, inspired by recent seed matching methods, in this paper we
propose a novel, efficient Anchor Matching Transformer (AMatFormer) for the
feature matching problem. AMatFormer has two main aspects. First, it
conducts self-/cross-attention mainly on a set of anchor features and
leverages these anchors as a message bottleneck to learn the representations
of all primal features; thus, it can be implemented efficiently and
compactly. Second, AMatFormer adopts a shared FFN module to embed the
features of the two images into a common domain and thereby learn consensus
feature representations for the matching problem. Experiments on several
benchmarks demonstrate the effectiveness and efficiency of the proposed
AMatFormer matching approach.
Comment: Accepted by IEEE Transactions on Multimedia (TMM)
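The anchor-as-bottleneck idea can be illustrated with plain dot-product attention. This is a hypothetical single-head sketch without learned projections or the shared FFN; the actual AMatFormer uses trained attention layers, but the cost argument is the same.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Scaled dot-product attention (single head, no projections)."""
    return softmax(q @ k.T / np.sqrt(q.shape[1]), axis=-1) @ v

def anchor_message_passing(primal, anchors):
    """Two-hop message passing through m anchors: anchors gather from all
    n primal features, then primal features read back from the updated
    anchors. Cost is O(n*m*d) rather than O(n^2*d) for full attention."""
    anchors = attend(anchors, primal, primal)  # gather:        (m, d)
    return attend(primal, anchors, anchors)    # broadcast back: (n, d)

rng = np.random.default_rng(1)
primal = rng.standard_normal((1000, 64))  # all feature descriptors
anchors = primal[:16]                     # e.g. 16 anchor/seed features
out = anchor_message_passing(primal, anchors)
print(out.shape)  # (1000, 64)
```

With m fixed at a few dozen anchors, the cost grows linearly in the number of primal features instead of quadratically, which is the efficiency claim of the abstract.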
Unsupervised Pre-Training of Image Features on Non-Curated Data
Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task. Most recent efforts in unsupervised feature learning have focused on either small or highly curated datasets like ImageNet, whereas using non-curated raw datasets was found to decrease feature quality when evaluated on a transfer task. Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available. To that end, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data. We validate our approach on 96 million images from YFCC100M [42], achieving state-of-the-art results among unsupervised methods on standard benchmarks, which confirms the potential of unsupervised learning when only non-curated raw data are available. We also show that pre-training a supervised VGG-16 with our method achieves 74.9% top-1 classification accuracy on the validation set of ImageNet, an improvement of +0.8% over the same network trained from scratch. Our code is available at https://github.com/facebookresearch/DeeperCluster
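The combination of clustering and self-supervision can be sketched in a few lines: cluster the features to get pseudo-labels, then cross them with a self-supervised signal such as the image's rotation class. This is a toy sketch with illustrative names, not the DeeperCluster implementation (which clusters at much larger scale and alternates with network training).

```python
import numpy as np

def kmeans_labels(feats, k, iters=20, seed=0):
    """Plain k-means; returns one cluster pseudo-label per feature vector."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(axis=0)
    return labels

def joint_targets(rot_class, cluster_label, k):
    """Cartesian-product target: one class per (rotation, cluster) pair,
    so a single classifier head learns both signals at once."""
    return rot_class * k + cluster_label

rng = np.random.default_rng(2)
feats = rng.standard_normal((300, 8))      # stand-in for CNN features
clusters = kmeans_labels(feats, k=10)
rots = rng.integers(0, 4, size=300)        # 4 rotation classes: 0/90/180/270
targets = joint_targets(rots, clusters, k=10)  # 4 * 10 = 40 joint classes
```

Training against the joint targets forces the features to be predictive of both the cluster structure (what the image contains) and the self-supervised transform (how the image was altered), which is how the two complementary statistics are captured.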
Blackthorn: Large-Scale Interactive Multimodal Learning
This paper presents Blackthorn, an efficient interactive multimodal learning approach facilitating analysis of multimedia collections of up to 100 million items on a single high-end workstation. Blackthorn features efficient data compression, feature selection, and optimizations to the interactive learning process. The Ratio-64 data representation introduced in this paper costs only tens of bytes per item yet preserves most of the visual and textual semantic information with good accuracy. The optimized interactive learning model scores the Ratio-64-compressed data directly, greatly reducing the computational requirements. The experiments compare Blackthorn with two baselines: conventional relevance feedback, and relevance feedback using product quantization to compress the features. The results show that Blackthorn is up to 77.5× faster than the conventional relevance feedback alternative, while outperforming the baseline with respect to the relevance of results: it vastly outperforms the baseline on recall over time and reaches up to 108% of its precision. Compared to the product quantization variant, Blackthorn is just as fast, while producing more relevant results. On the full YFCC100M dataset, Blackthorn performs one complete interaction round in roughly 1 s while maintaining adequate relevance of results, thus opening multimedia collections comprising up to 100 million items to fully interactive learning-based analysis.
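The core trick, scoring a compact per-item representation directly with a linear model, can be sketched generically. This is an assumption-laden illustration of sparse top-k compression, not the actual Ratio-64 format, whose details are specified in the paper.

```python
import numpy as np

def compress_topk(feat, k=8):
    """Keep only the k strongest components of a feature vector as
    (index, int8-quantized value) pairs -- a generic compact encoding,
    costing a few bytes per kept component."""
    idx = np.argsort(-np.abs(feat))[:k].astype(np.int32)
    vals = np.clip(np.round(feat[idx] * 127), -127, 127).astype(np.int8)
    return idx, vals

def score_compressed(w, idx, vals):
    """Linear relevance score evaluated directly on the compressed item,
    touching only k of the d model dimensions."""
    return float(w[idx] @ (vals.astype(np.float64) / 127.0))

rng = np.random.default_rng(3)
feat = rng.standard_normal(128) * 0.5   # stand-in multimodal feature
w = rng.standard_normal(128)            # interactive linear model weights
idx, vals = compress_topk(feat, k=8)
score = score_compressed(w, idx, vals)
```

Because scoring never decompresses back to the full vector, each interaction round is a sparse dot product per item, which is what makes sub-second rounds over 100 million items plausible on one workstation.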