Nonconvex Generalization of ADMM for Nonlinear Equality Constrained Problems
The ever-increasing demand for efficient and distributed optimization
algorithms for large-scale data has led to the growing popularity of the
Alternating Direction Method of Multipliers (ADMM). However, although the use
of ADMM to solve linear equality constrained problems is well understood, we
lack a generic framework for solving problems with nonlinear equality
constraints, which are common in practical applications (e.g., spherical
constraints). To address this problem, we propose a new generic ADMM
framework for handling nonlinear equality constraints, neADMM. After
introducing the generalized problem formulation and the neADMM algorithm, the
convergence properties of neADMM are discussed, along with its sublinear
convergence rate o(1/k), where k is the number of iterations. Next, two
important applications of neADMM are considered and the paper concludes by
describing extensive experiments on several synthetic and real-world datasets
to demonstrate the convergence and effectiveness of neADMM compared to existing
state-of-the-art methods.
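The abstract's neADMM updates are not given here, but the kind of nonlinear equality constraint it mentions (a spherical constraint) can be illustrated with a classical augmented-Lagrangian / method-of-multipliers loop. This is a hedged sketch, not the paper's algorithm; the toy objective, penalty parameter, step size, and iteration counts are all illustrative assumptions.

```python
import numpy as np

# Toy problem with the nonlinear (spherical) constraint the abstract mentions:
#   minimize 1/2 ||x - a||^2   subject to  h(x) = ||x||^2 - 1 = 0
# Solved with a classical augmented-Lagrangian loop (NOT the paper's neADMM):
#   L(x, y) = 1/2 ||x - a||^2 + y * h(x) + (rho/2) * h(x)^2
def spherical_projection_via_multipliers(a, rho=2.0, outer=50, inner=300, lr=0.02):
    x = a / np.linalg.norm(a)  # feasible start on the unit sphere
    y = 0.0                    # scalar multiplier for h(x)
    for _ in range(outer):
        # primal update: gradient descent on the augmented Lagrangian in x
        for _ in range(inner):
            h = x @ x - 1.0
            grad = (x - a) + 2.0 * (y + rho * h) * x
            x = x - lr * grad
        # dual update: ascent step on the multiplier
        y = y + rho * (x @ x - 1.0)
    return x, y

a = np.array([3.0, 4.0])
x, y = spherical_projection_via_multipliers(a)
# The constrained minimizer is the projection of a onto the unit sphere, a/||a||.
```

The dual ascent step mirrors the multiplier update in ADMM-type methods; for this problem the iterates converge to x = a/||a|| with multiplier y = (||a|| - 1)/2.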
Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources
Sentiment analysis of user-generated reviews or comments on products and
services in social networks can help enterprises to analyze the feedback from
customers and take corresponding actions for improvement. To reduce the need for
large-scale annotation of the target domain, domain adaptation (DA) provides
an alternative solution by learning a transferable model from other labeled
source domains. Existing multi-source domain adaptation (MDA) methods either
fail to extract sentiment-related discriminative features in the target domain,
neglect the correlations among different sources and the distribution
differences among sub-domains even within the same source, or
cannot reflect the varying optimal source weighting across training stages.
In this paper, we propose a novel instance-level MDA framework, named
curriculum cycle-consistent generative adversarial network (C-CycleGAN), to
address the above issues. Specifically, C-CycleGAN consists of three
components: (1) pre-trained text encoder which encodes textual input from
different domains into a continuous representation space, (2) intermediate
domain generator with curriculum instance-level adaptation which bridges the
gap across source and target domains, and (3) task classifier trained on the
intermediate domain for final sentiment classification. C-CycleGAN transfers
source samples at instance-level to an intermediate domain that is closer to
the target domain with sentiment semantics preserved and without losing
discriminative features. Further, our dynamic instance-level weighting
mechanisms can assign the optimal weights to different source samples in each
training stage. We conduct extensive experiments on three benchmark datasets
and achieve substantial gains over state-of-the-art DA approaches. Our source
code is released at: https://github.com/WArushrush/Curriculum-CycleGAN.
Comment: Accepted by WWW 202
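The dynamic instance-level weighting idea can be sketched as follows. This is an illustrative assumption, not C-CycleGAN's actual rule: each source instance is weighted by its similarity to the target domain, and a stage-dependent temperature anneals the weights so early stages emphasize the most target-like samples while later stages flatten toward uniform. The function name, encoders, and temperature schedule are all hypothetical.

```python
import numpy as np

# Hypothetical curriculum weighting sketch (not the paper's mechanism).
# src_enc: (n_src, d) source instance encodings; tgt_enc: (n_tgt, d) target encodings.
# "Closeness" is cosine similarity to the mean target encoding; the softmax
# temperature grows linearly across training stages: sharp early, flat late.
def curriculum_weights(src_enc, tgt_enc, stage, total_stages, t0=0.1, t1=1.0):
    tgt_mean = tgt_enc.mean(axis=0)
    sim = src_enc @ tgt_mean / (
        np.linalg.norm(src_enc, axis=1) * np.linalg.norm(tgt_mean) + 1e-8
    )
    temp = t0 + (t1 - t0) * stage / max(total_stages - 1, 1)
    logits = sim / temp
    w = np.exp(logits - logits.max())  # numerically stable softmax
    return w / w.sum()

src = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[1.0, 0.1]])
w = curriculum_weights(src, tgt, stage=0, total_stages=10)  # weights sum to 1
```

With a low early temperature the most target-like instance dominates the weight mass; as the temperature rises the distribution relaxes, matching the abstract's claim of different optimal weights in each training stage.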