DEPHN: Different Expression Parallel Heterogeneous Network using virtual gradient optimization for Multi-task Learning
Recommendation system algorithms based on multi-task learning (MTL) are the major method for Internet operators to understand users and predict their behaviors in multi-behavior platform scenarios. Task correlation is an important consideration for MTL goals; traditional models use shared-bottom architectures and gating experts to realize shared representation learning and information differentiation. However, the relationships between real-world tasks are often more complex than existing methods can properly handle when sharing information. In this paper, we propose a Different Expression Parallel Heterogeneous Network (DEPHN) to model multiple tasks simultaneously. DEPHN
constructs the experts at the bottom of the model by using different feature
interaction methods to improve the generalization ability of the shared
information flow. In view of the model's differentiating ability for different
task information flows, DEPHN uses feature explicit mapping and virtual
gradient coefficient for expert gating during the training process, and
adaptively adjusts the learning intensity of the gated unit by considering the
difference of gating values and task correlation. Extensive experiments on
artificial and real-world datasets demonstrate that our proposed method can
capture task correlation in complex situations and achieve better performance
than baseline models. (Accepted at IJCNN 2023.)
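The shared-bottom-with-gating-experts pattern that this abstract builds on can be sketched as an MMoE-style gate over parallel experts. This is a generic illustration only, not DEPHN's architecture: the names `gated_experts` and `gate_w` are invented here, and DEPHN's virtual gradient coefficient and explicit feature mapping are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gated_experts(x, expert_ws, gate_w):
    """Combine expert outputs with a per-sample softmax gate (MMoE-style).

    Each expert is a linear map here; the gate assigns every input its own
    mixture over the K experts, which is how gating networks differentiate
    information flows across tasks.
    """
    expert_outs = np.stack([x @ W for W in expert_ws], axis=1)  # (n, K, d_out)
    gates = softmax(x @ gate_w)                                 # (n, K)
    combined = np.einsum("nk,nkd->nd", gates, expert_outs)      # (n, d_out)
    return combined, gates

n, d_in, d_out, K = 4, 8, 3, 5
x = rng.normal(size=(n, d_in))
expert_ws = [rng.normal(size=(d_in, d_out)) for _ in range(K)]
gate_w = rng.normal(size=(d_in, K))
y, gates = gated_experts(x, expert_ws, gate_w)
```

In a multi-task model, each task gets its own gate (and its own tower on top of `combined`), so tasks can weight the shared experts differently.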
Weighted Semi-Supervised Approaches for Predictive Modeling and Truth Discovery
Multi-View Learning (MVL) is a framework that combines data from heterogeneous sources in an efficient manner in which the different views learn from each other, thereby improving the overall prediction of the task. By not combining the data from different views together, we preserve the underlying statistical properties of each view, thereby learning from data in their original feature spaces. Additionally, MVL also mitigates the problem of high dimensionality when data from multiple sources are integrated. We have exploited this property of MVL to predict chemical-target and drug-disease associations. Every chemical or drug can be represented in diverse feature spaces that could be viewed as multiple views. Similarly, multi-task learning (MTL) frameworks enable the joint learning of related tasks, which improves the overall performance of the tasks compared to learning them individually. This factor allows us to learn related targets and related diseases together. An empirical study has been carried out to study the combined effects of multi-view multi-task learning (MVMTL) to predict chemical-target interactions and drug-disease associations. The first half of the thesis focuses on two methods that closely resemble MVMTL. We first explain the weighted Multi-View Learning (wMVL) framework that systematically learns from heterogeneous data sources by weighting the views in terms of their predictive power. We extend the work to include multi-task learning and formulate the second method, called Multi-Task with weighted Multi-View Learning (MTwMVL). The performance of these two methods has been evaluated on cheminformatics data sets. We change gears in the second part of this thesis towards truth discovery (TD). Truth discovery closely resembles a multi-view setting, but the two strongly differ in certain aspects.
While the underlying assumption in multi-view learning is that the different views have label consistency, truth finding differs in its setup: the main objective is to find the true value of an object given that different sources might conflict with each other and claim different values for that object. The sources could be considered as views, and the primary strategy in truth finding is to estimate the reliability of each source and its contribution to the truth. There are many methods that address various challenges and aspects of truth discovery, and in this thesis we have looked at TD in a semi-supervised setting. As the third contribution of this dissertation, we adopt a semi-supervised truth discovery framework in which we consider the labeled objects and unlabeled objects as two closely related tasks, with one task having strong labels while the other has weak labels. We show that a small set of ground truth helps in achieving better accuracy than the unsupervised methods.
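The semi-supervised strategy the abstract describes can be sketched as a two-step weighted vote: estimate each source's reliability on the small labeled set, then resolve unlabeled objects by reliability-weighted voting. This is a minimal illustration of the general idea, not the thesis's actual framework; the functions `source_reliability` and `resolve` and the toy claims are invented here.

```python
from collections import defaultdict

def source_reliability(claims, labels):
    """Estimate each source's reliability as its accuracy on labeled objects.

    claims: {source: {object: claimed_value}}
    labels: {object: true_value}  (the small ground-truth set)
    Sources with no labeled objects fall back to a neutral 0.5.
    """
    rel = {}
    for src, vals in claims.items():
        seen = [o for o in vals if o in labels]
        rel[src] = (sum(vals[o] == labels[o] for o in seen) / len(seen)
                    if seen else 0.5)
    return rel

def resolve(claims, rel, obj):
    """Weighted vote: the value backed by the most total reliability wins."""
    votes = defaultdict(float)
    for src, vals in claims.items():
        if obj in vals:
            votes[vals[obj]] += rel[src]
    return max(votes, key=votes.get)

# Three conflicting sources; o1 and o2 are labeled, o3 is not.
claims = {
    "A": {"o1": "x", "o2": "y", "o3": "x"},
    "B": {"o1": "x", "o2": "z", "o3": "z"},
    "C": {"o1": "w", "o2": "y", "o3": "x"},
}
labels = {"o1": "x", "o2": "y"}
rel = source_reliability(claims, labels)
truth_o3 = resolve(claims, rel, "o3")
```

Unsupervised truth discovery must bootstrap these reliabilities iteratively; the labeled set lets them be estimated directly, which is the advantage the abstract reports.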
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. The comprehensive problem-oriented review
of the advances in transfer learning with respect to each problem has not only revealed the challenges in transfer learning for visual recognition, but also shown which problems (eight of the seventeen) have scarcely been studied. This survey not only presents an up-to-date technical review for researchers, but also provides a systematic approach and a reference for machine learning practitioners to categorise a real problem and look up a possible solution accordingly.
Transfer Learning for Speech and Language Processing
Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example, in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language, with little or no re-training data.
Transfer learning is closely related to multi-task learning (cross-lingual vs.
multilingual), and is traditionally studied under the name of 'model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with high-level abstract features learned by deep
models, and the `transfer' can be conducted not only between data distributions
and data types, but also between model structures (e.g., shallow nets and deep
nets) or even model types (e.g., Bayesian models and neural models). This
review paper summarizes some recent prominent research towards this direction,
particularly for speech and language processing. We also report some results
from our group and highlight the potential of this very interesting research
field. Comment: 13 pages, APSIPA 201
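The "little or no re-training data" scenario the review opens with can be sketched as feature-level transfer: keep the lower layers of a source model frozen and fit only a small readout on the target task. This is a generic illustration, not any system from the review; the frozen random projection standing in for a pretrained acoustic model, and the names `features` and `readout`, are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the lower layers of a pretrained source model: a frozen
# nonlinear feature extractor (here a fixed random projection + tanh).
W_frozen = rng.normal(size=(10, 32))

def features(x):
    """High-level abstract features produced by the frozen source model."""
    return np.tanh(x @ W_frozen)

# Small target-task dataset: only the linear readout on top of the frozen
# features is trained, here in closed form via ridge regression.
x_tgt = rng.normal(size=(20, 10))
y_tgt = rng.normal(size=(20, 1))
H = features(x_tgt)
readout = np.linalg.solve(H.T @ H + 1e-2 * np.eye(32), H.T @ y_tgt)
pred = H @ readout
```

Because only the 32-parameter-per-output readout is fit, the target task needs far less data than training the full model, which is the core appeal of transfer with deep features.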