Unsupervised Neural Machine Translation with SMT as Posterior Regularization
Without a real bilingual corpus available, unsupervised Neural Machine
Translation (NMT) typically requires pseudo-parallel data generated by
back-translation for model training. However, due to the weak supervision, the
pseudo data inevitably contain noise and errors that are accumulated and
reinforced in the subsequent training process, degrading translation
performance. To address this issue, we introduce phrase-based Statistical
Machine Translation (SMT) models, which are robust to noisy data, as posterior
regularizers to guide the training of unsupervised NMT models during the
iterative back-translation process. Our method starts from SMT models built
with pre-trained language models and word-level translation tables inferred
from cross-lingual embeddings. The SMT and NMT models are then optimized jointly
and boost each other incrementally in a unified EM framework. In this way, (1)
the negative effect of errors in the iterative back-translation process is
alleviated in a timely manner, as SMT filters noise out of its phrase tables;
meanwhile, (2) NMT compensates for the lack of fluency inherent in SMT.
Experiments on en-fr and en-de translation tasks show that our method
outperforms strong baselines and achieves new state-of-the-art unsupervised
machine translation performance.
Comment: To be presented at AAAI 2019; 9 pages, 4 figures
Lepton flavor violating signals of the neutral top-pion in future lepton colliders
The presence of top-pions in the low-energy spectrum is an inevitable feature
of the topcolor scenario. Taking into account the constraints that present
experimental limits on lepton flavor violating processes place on the free
parameters of topcolor-assisted technicolor (TC2) models, we study the
contributions of the neutral top-pion to several lepton flavor violating
processes via its flavor-changing couplings and discuss the possibility of
searching for these signals in future lepton colliders.
Comment: References added, some typos corrected. Version to be published in
Phys. Rev.
Attacking The Assortativity Coefficient Under A Rewiring Strategy
Degree correlation is an important characteristic of networks, which is
usually quantified by the assortativity coefficient. However, concerns arise
about how the assortativity coefficient of a network changes when the network
suffers adversarial attacks. In this paper, we analyze the factors that affect
the assortativity coefficient and study the optimization problem of maximizing
or minimizing the assortativity coefficient (r) of a network by rewiring pairs
of edges. We propose a greedy algorithm and also formulate the optimization
problem as an integer program to obtain the optimal solution. Through
experiments, we demonstrate the reasonableness and effectiveness of the
proposed algorithm. For example, after rewiring 10% of the edges in an ER
network, the assortativity coefficient improves by 60%.
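The greedy idea described above can be sketched in plain Python: repeatedly propose degree-preserving double-edge swaps and keep a swap only if it moves the assortativity coefficient in the desired direction. The function names and the acceptance rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def assortativity(edges):
    """Degree assortativity r: Pearson correlation of endpoint degrees,
    taken over both orderings of every edge (Newman's definition)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # regular graph: r is undefined, report 0
    return cov / (vx * vy) ** 0.5

def greedy_rewire(edges, swaps, maximize=True, seed=0):
    """Greedy double-edge swaps (a,b),(c,d) -> (a,c),(b,d); each swap
    preserves all node degrees and is kept only if it improves r."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    r = assortativity(edges)
    for _ in range(swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        if frozenset((a, c)) in edge_set or frozenset((b, d)) in edge_set:
            continue  # swap would create a multi-edge
        cand = list(edges)
        cand[i], cand[j] = (a, c), (b, d)
        r_new = assortativity(cand)
        if r_new != r and (r_new > r) == maximize:
            edges, r = cand, r_new
            edge_set = {frozenset(e) for e in edges}
    return edges, r
```

Because every accepted swap preserves the degree sequence, only the degree correlation changes, which is exactly the quantity under attack; an integer program over the same swap variables would recover the optimum that this greedy search approximates.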
Glueball Masses from Hamiltonian Lattice QCD
We calculate glueball masses from QCD in 3+1 dimensions using an eigenvalue
equation method for Hamiltonian lattice QCD developed and described elsewhere
by the authors. The mass ratios become approximately constant in the coupling
region studied, from which we estimate the glueball masses.
Comment: 12 pages, LaTeX, figures to be sent upon request
Interpretable Domain-Aware Learning for Neuroimage Classification
In this thesis, we propose three interpretable domain-aware machine learning approaches to
analyse large-scale neuroimaging data from multiple domains, e.g. multiple centres and/or demographic groups. We focus on two questions: how to learn general patterns across domains, and how to learn domain-specific patterns.
Our first approach develops a feature-classifier adaptation framework for semi-supervised domain adaptation on brain decoding tasks. Based on this empirical study, we derive a dependence-based generalisation bound to guide the design of domain-aware learning algorithms. This theoretical result leads to the next two approaches. The covariate-independence regularisation approach is for learning domain-generic patterns. Incorporating the hinge and least-squares losses yields two covariate-independence regularised classifiers, whose superiority is validated by experimental results on brain decoding tasks for unsupervised multi-source domain adaptation. The covariate-dependent learning approach is for learning domain-specific patterns; for example, it can learn gender-specific patterns of brain lateralisation by employing the logistic loss.
Interpretability is often essential for neuroimaging tasks. Therefore, all three domain-aware learning approaches are primarily designed to produce linear, interpretable models. These domain-aware learning approaches offer feasible ways to learn interpretable general or specific patterns from multi-domain neuroimaging data for neuroscientists to gain insights. With source code released on GitHub, this work will accelerate data-driven neuroimaging studies and advance multi-source domain adaptation research.
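The abstract does not spell out the regulariser, so the following is a minimal NumPy sketch of one plausible formulation of a covariate-independence regularised least-squares classifier: the fit is penalised for linear dependence between its predictions and the centred domain covariates. The name `covariate_independence_ls`, the parameter `lam`, and the closed-form solve are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def covariate_independence_ls(X, y, C, lam=1.0, eps=1e-6):
    """Least-squares classifier with a covariate-independence penalty.

    Minimises ||Xw - y||^2 + lam * ||C_c^T X w||^2, where C_c is the
    column-centred covariate matrix (e.g. site or demographic indicators),
    so large lam pushes the predictions Xw towards zero sample covariance
    with the covariates. Setting the gradient to zero gives the closed form
    (X^T X + lam * X^T C_c C_c^T X + eps*I) w = X^T y.
    """
    C_c = C - C.mean(axis=0)            # centre the covariates
    P = X.T @ C_c                       # feature-covariate cross-moments
    A = X.T @ X + lam * P @ P.T + eps * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)  # closed-form minimiser
```

With a large `lam`, the fitted weights trade a little training accuracy for predictions that are linearly decorrelated from the covariates, while the model stays linear, in keeping with the interpretability goal stated above.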