403 research outputs found
Deep Supervised Hashing using Symmetric Relative Entropy
By virtue of their simplicity and efficiency, hashing algorithms have achieved significant success in large-scale approximate nearest neighbor search. Recently, many deep neural network based hashing methods have been proposed to improve search accuracy by simultaneously learning both the feature representation and the binary hash functions. Most deep hashing methods rely on supervised semantic label information to preserve the distance or similarity between local structures, which unfortunately ignores the global distribution of the learned hash codes. We propose a novel deep supervised hashing method that aims to minimize the information loss generated during the embedding process. Specifically, the information loss is measured by the Jensen-Shannon divergence to ensure that the compact hash codes have a distribution similar to that of the original images. Experimental results show that our method outperforms current state-of-the-art approaches on two benchmark datasets.
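The abstract does not spell out how the divergence is computed; as a rough illustration of the idea, the NumPy sketch below compares the pairwise-similarity distribution of real-valued embeddings with that of their sign-binarized hash codes via the Jensen-Shannon divergence (the histogram construction and all names are illustrative assumptions, not the paper's method).

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_hist(x, bins=20):
    """Histogram of all pairwise cosine similarities in x."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = (x @ x.T)[np.triu_indices(len(x), k=1)]
    return np.histogram(sims, bins=bins, range=(-1, 1))[0].astype(float)

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 32))   # toy real-valued features
codes = np.sign(emb)               # 32-bit hash codes in {-1, +1}
print(f"JS divergence: {js_divergence(similarity_hist(emb), similarity_hist(codes)):.4f}")
```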
Hashing for Similarity Search: A Survey
Similarity search (nearest neighbor search) is the problem of finding the data items whose distances to a query item are the smallest in a large database. Various methods have been developed to address this problem, and recently much effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. We divide the hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measures, and search schemes in the hash coding space.
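As a concrete instance of the first (data-independent) category, here is a minimal sketch of random-hyperplane LSH for cosine similarity, in which the hash functions are drawn without ever looking at the data; class and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

class RandomHyperplaneLSH:
    """Data-independent LSH for cosine similarity (random hyperplanes)."""
    def __init__(self, dim, n_bits):
        self.planes = rng.normal(size=(n_bits, dim))  # drawn independently of the data

    def hash(self, x):
        # One bit per hyperplane: which side of it the vector falls on.
        return (self.planes @ x > 0).astype(np.uint8).tobytes()

lsh = RandomHyperplaneLSH(dim=64, n_bits=16)
db = rng.normal(size=(1000, 64))
buckets = {}
for i, v in enumerate(db):
    buckets.setdefault(lsh.hash(v), []).append(i)

query = db[0] + 0.05 * rng.normal(size=64)  # slightly perturbed copy of item 0
candidates = buckets.get(lsh.hash(query), [])
print(f"bucket size: {len(candidates)}, contains item 0: {0 in candidates}")
```

Nearby points tend to fall on the same side of most hyperplanes, so they usually share a bucket; learning-to-hash methods instead fit the projections (or a nonlinear encoder) to the data distribution.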
Aligning Language Models with Preferences through f-divergence Minimization
Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an implicit target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an explicit target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, f-DPG, which allows the use of any f-divergence to approximate any target distribution. f-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences are good for approximating different targets. For instance, we discover that for GDC, the Jensen-Shannon divergence frequently outperforms the forward KL divergence by a wide margin, leading to significant improvements over prior work.
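For readers unfamiliar with the family, the sketch below evaluates the three divergences named above on two toy discrete distributions through their generator functions f; it illustrates the definition D_f(p || q) = sum_x q(x) f(p(x)/q(x)) only, not the f-DPG algorithm itself.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p || q) = sum_x q(x) * f(p(x) / q(x)) for discrete p, q."""
    return float(np.sum(q * f(p / q)))

forward_kl = lambda t: t * np.log(t)   # recovers KL(p || q)
reverse_kl = lambda t: -np.log(t)      # recovers KL(q || p)
js = lambda t: 0.5 * (t * np.log(2 * t / (t + 1)) + np.log(2 / (t + 1)))

p = np.array([0.7, 0.2, 0.1])  # e.g. a target distribution
q = np.array([0.4, 0.4, 0.2])  # e.g. the current policy
for name, f in [("forward KL", forward_kl), ("reverse KL", reverse_kl), ("JS", js)]:
    print(f"{name:>10}: {f_divergence(p, q, f):.4f}")
```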
Efficient Learning Framework for Training Deep Learning Models with Limited Supervision
In recent years, deep learning has shown tremendous success in different applications; however, these models mostly need a large labeled dataset to train their parameters. In this work, we aim to explore the potential of efficient learning frameworks for training deep models on different problems under limited supervision or noisy labels.
For the image clustering problem, we introduce a new deep convolutional autoencoder within an unsupervised learning framework. We employ relative entropy minimization as the clustering objective, regularized by the frequency of cluster assignments and a reconstruction loss.
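The exact objective is not given in the abstract; one plausible reading, sketched below, combines a DEC-style relative-entropy (KL) clustering term, a cluster-frequency balance regularizer, and a reconstruction term. The target-sharpening scheme and the weights `lam` and `mu` are assumptions.

```python
import numpy as np

def clustering_loss(q, recon_err, lam=1.0, mu=1.0, eps=1e-12):
    """Sketch: KL(p || q) to a sharpened target p (DEC-style), plus a
    frequency regularizer KL(freq || uniform), plus reconstruction error."""
    p = q ** 2 / q.sum(axis=0)
    p = p / p.sum(axis=1, keepdims=True)   # sharpened target assignments
    kl = np.sum(p * np.log((p + eps) / (q + eps)))
    freq = q.mean(axis=0)                  # average cluster usage
    balance = np.sum(freq * np.log(freq * len(freq) + eps))
    return kl + lam * balance + mu * recon_err

rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 3))
q = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # soft assignments
print(clustering_loss(q, recon_err=0.25))
```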
For noisy labels obtained from crowdsourcing platforms, we propose a novel deep hybrid model for sentiment analysis of text data, such as tweets, based on noisy crowd labels. The proposed model consists of a crowdsourcing aggregation model and a deep text autoencoder. We combine these sub-models within a probabilistic framework rather than heuristically, and derive an efficient optimization algorithm to jointly solve the corresponding problem.
In order to improve the performance of unsupervised deep hash functions for image similarity search on large datasets, we adopt generative adversarial networks in a new deep image retrieval model, where the adversarial loss is employed as a data-dependent regularizer in our objective function.
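As a sketch of how an adversarial loss can serve as a data-dependent regularizer, the snippet below computes the standard non-saturating GAN discriminator and generator terms from toy discriminator outputs; the thesis' actual architecture and loss weighting are not specified here.

```python
import numpy as np

def adversarial_terms(d_real, d_fake, eps=1e-12):
    """Non-saturating GAN losses; the generator term would be added to the
    retrieval objective as a data-dependent regularizer (a sketch)."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

rng = np.random.default_rng(4)
d_real = rng.uniform(0.6, 0.9, size=32)  # toy discriminator scores on real data
d_fake = rng.uniform(0.1, 0.4, size=32)  # toy scores on generated data
print(adversarial_terms(d_real, d_fake))
```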
We also introduce a balanced self-paced learning algorithm for training a GAN-based model for image clustering, where input samples are gradually included in training from easy to difficult, while the diversity of the selected samples across all clusters is also considered.
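A minimal sketch of the selection step such a balanced scheme implies: take the lowest-loss (easiest) fraction of samples separately within each cluster, and grow that fraction over training; function and variable names are illustrative.

```python
import numpy as np

def balanced_self_paced_select(losses, clusters, frac):
    """Select the easiest `frac` of samples per cluster, so the curriculum
    grows from easy to hard while staying balanced across clusters."""
    selected = []
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        k = max(1, int(frac * len(idx)))
        selected.extend(idx[np.argsort(losses[idx])[:k]])  # lowest loss = easiest
    return np.array(sorted(selected))

rng = np.random.default_rng(2)
losses = rng.random(20)
clusters = rng.integers(0, 4, size=20)
for frac in (0.25, 0.5, 1.0):  # pace parameter grows over training
    print(frac, balanced_self_paced_select(losses, clusters, frac))
```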
In addition, we explore discriminative approaches to unsupervised visual representation learning rather than generative algorithms, such as maximizing the mutual information between an input image and its representation and using a contrastive loss to decrease the distance between the representations of original and augmented image data.
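The contrastive loss is not specified beyond this description; an InfoNCE-style loss between two augmented views, a common choice in this family of methods, can serve as a sketch (the thesis may use a different variant).

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE: matching rows of z1 and z2 (two views of the same image)
    are positives; all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(3)
z = rng.normal(size=(16, 8))
print(f"aligned views: {info_nce(z, z + 0.01 * rng.normal(size=(16, 8))):.3f}")
print(f"random views:  {info_nce(z, rng.normal(size=(16, 8))):.3f}")
```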
Unsupervised Domain Adaptation via Joint Contrastive Learning
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2021.
Domain adaptation is introduced to exploit the label information of the source domain when labels are not available for the target domain. Previous methods minimized the domain discrepancy in a latent space to enable transfer learning. These studies are based on the theoretical analysis that the target error is upper bounded by the sum of the source error, the domain discrepancy, and the joint error of the ideal hypothesis. However, feature discriminability is sacrificed while enhancing feature transferability by matching marginal distributions. In particular, the ideal joint hypothesis error in the target error upper bound, which was previously considered to be minute, has been found to be significant, impairing the theoretical guarantee.
In this paper, to manage the joint error, we propose an alternative upper bound on the target error that explicitly considers it. Based on the theoretical analysis, we suggest a joint optimization framework that combines the source and target domains. To minimize the joint error, we further introduce Joint Contrastive Learning (JCL), which finds class-level discriminative features. With a solid theoretical framework, JCL employs a contrastive loss to maximize the mutual information between a feature and its label, which is equivalent to maximizing the Jensen-Shannon divergence between conditional distributions. Extensive experiments on domain adaptation datasets demonstrate that JCL outperforms existing state-of-the-art methods.
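The stated equivalence can be checked numerically in a toy discrete case: with a uniform binary label y, the mutual information I(z; y) equals the Jensen-Shannon divergence between the two class-conditional distributions (the distributions below are illustrative).

```python
import numpy as np

def mutual_information(joint, eps=1e-12):
    """I(z; y) from a joint table p(z, y)."""
    pz, py = joint.sum(axis=1), joint.sum(axis=0)
    return float(np.sum(joint * np.log((joint + eps) / (np.outer(pz, py) + eps))))

def js_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_z_given_0 = np.array([0.6, 0.3, 0.1])   # p(z | y = 0)
p_z_given_1 = np.array([0.1, 0.3, 0.6])   # p(z | y = 1)
joint = 0.5 * np.stack([p_z_given_0, p_z_given_1], axis=1)  # uniform p(y)
print(f"I(z; y)              = {mutual_information(joint):.6f}")
print(f"JS(p(z|0) || p(z|1)) = {js_divergence(p_z_given_0, p_z_given_1):.6f}")
```

Both lines print the same value, which is the identity the thesis exploits.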
1 Introduction
2 Background
2.1 Domain Adaptation
2.1.1 Problem Setting and Notations
2.1.2 Theoretical Background
2.2 Approaches for Domain Adaptation
2.2.1 Marginal Distribution Alignment Based Approaches
2.2.2 Conditional Distribution Matching Approaches
2.3 Contrastive Learning
3 Method
3.1 An Alternative Upper Bound
3.2 Joint Contrastive Learning
3.2.1 Theoretical Guarantees
3.2.2 Generalization to Multiclass Classification
3.2.3 Training Procedure
4 Experiments
4.1 Datasets and Baselines
4.2 Implementation Details
4.3 Results
4.4 Ablation Studies
5 Conclusion
Abstract (In Korean)
- …