Regularization Through Simultaneous Learning: A Case Study on Plant Classification
In response to the prevalent challenge of overfitting in deep neural
networks, this paper introduces Simultaneous Learning, a regularization
approach drawing on principles of Transfer Learning and Multi-task Learning. We
train auxiliary datasets jointly with the target dataset, UFOP-HVD, performing
simultaneous classification guided by a customized loss function with an
inter-group penalty. This configuration allows a detailed examination of model
performance across a similar (PlantNet) and a dissimilar (ImageNet) domain,
thereby improving the generalizability of Convolutional Neural Network models.
Our approach demonstrates
superior performance over models without regularization and those applying
dropout regularization exclusively, enhancing accuracy by 5 to 22 percentage
points. Moreover, when combined with dropout, the proposed approach improves
generalization, securing state-of-the-art results for the UFOP-HVD challenge.
The method also showcases efficiency with significantly smaller sample sizes,
suggesting its broad applicability across a spectrum of related tasks. In
addition, an interpretability approach is deployed to evaluate feature quality
by analyzing class feature correlations within the network's convolutional
layers. The findings of this study provide deeper insights into the efficacy of
Simultaneous Learning, particularly concerning its interaction with the
auxiliary and target datasets.
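The abstract names a customized loss with an inter-group penalty but gives no formula. The sketch below is one hedged reading, assuming the joint softmax output is partitioned into per-dataset class groups (target vs. auxiliary) and penalizing probability mass that leaks across groups; all function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def simultaneous_loss(logits, label, group_slices, sample_group, penalty=1.0):
    """Cross-entropy over the joint label space plus an inter-group penalty.

    logits       : (C,) scores over target + auxiliary classes combined
    label        : index of the true class in the joint label space
    group_slices : {group_name: slice} partitioning the C classes by dataset
    sample_group : name of the group (dataset) this sample belongs to
    penalty      : weight on probability mass assigned to other groups
    (Hypothetical names; one possible reading of the abstract, not the
    paper's exact loss.)
    """
    p = softmax(logits)
    ce = -np.log(p[label] + 1e-12)
    # Probability mass placed outside the sample's own dataset group.
    leak = 1.0 - p[group_slices[sample_group]].sum()
    return ce + penalty * leak
```

With `penalty=0` this reduces to plain cross-entropy on the joint label space; the extra term discourages a target-dataset sample from being scored as an auxiliary-dataset class, which matches the inter-group reading above.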
Zero-Annotation Object Detection with Web Knowledge Transfer
Object detection is one of the major problems in computer vision, and has
been extensively studied. Most of the existing detection works rely on
labor-intensive supervision, such as ground truth bounding boxes of objects or
at least image-level annotations. In contrast, we propose an object
detection method that does not require any form of human annotation on target
tasks, by exploiting freely available web images. In order to facilitate
effective knowledge transfer from web images, we introduce a multi-instance
multi-label domain adaptation learning framework with two key innovations. First
of all, we propose an instance-level adversarial domain adaptation network with
attention on foreground objects to transfer the object appearances from web
domain to target domain. Second, to preserve the class-specific semantic
structure of transferred object features, we propose a simultaneous transfer
mechanism to transfer the supervision across domains through pseudo strong
label generation. With our end-to-end framework that simultaneously learns a
weakly supervised detector and transfers knowledge across domains, we achieved
significant improvements over baseline methods on the benchmark datasets.Comment: Accepted in ECCV 201
Partial Transfer Learning with Selective Adversarial Networks
Adversarial learning has been successfully embedded into deep networks to
learn transferable features, which reduce distribution discrepancy between the
source and target domains. Existing domain adversarial networks assume fully
shared label space across domains. In the presence of big data, there is strong
motivation for transferring both classification and representation models from
existing large domains to unknown small domains. This paper introduces partial
transfer learning, which relaxes the shared-label-space assumption by requiring
only that the target label space be a subspace of the source label space.
Previous methods typically match the whole source domain to the target domain,
an approach prone to negative transfer in the partial transfer setting. We
present the Selective Adversarial Network (SAN), which simultaneously
circumvents negative transfer by filtering out the outlier source classes and
promotes positive
transfer by maximally matching the data distributions in the shared label
space. Experiments demonstrate that our models exceed state-of-the-art results
for partial transfer learning tasks on several benchmark datasets.
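The class-selection idea behind SAN can be illustrated with a simplified weighting scheme: average the source classifier's predictions over unlabeled target data, and down-weight source classes the target data rarely activates, so outlier classes contribute little to the adversarial alignment. This is a sketch of the idea under that assumption, not the paper's per-class discriminator formulation; names are hypothetical.

```python
import numpy as np

def outlier_class_weights(target_probs):
    """Per-class weights from average target predictions (SAN-style idea).

    target_probs : (N, C) softmax outputs of the source classifier on
                   unlabeled target samples.
    Classes that target samples rarely activate receive small weights,
    down-weighting outlier source classes during adversarial matching.
    (Simplified illustration; the paper's mechanism is more involved.)
    """
    w = target_probs.mean(axis=0)   # average activation per source class
    return w / w.max()              # normalize so the strongest class is 1.0
```

In a partial transfer setting, classes absent from the target label space tend to receive near-zero average probability, so this weighting approximates "selecting out" the outlier source classes described above.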