
    Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary

    Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework, which takes advantage of cross-lingual word embeddings trained solely on a high-coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods. Comment: 5 pages with 2 pages of references. Accepted to appear in ACL 201
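
    The framework above builds on cross-lingual word embeddings induced solely from a bilingual dictionary. The paper's joint neural model is not reproduced here; the sketch below only illustrates one standard way of obtaining such embeddings, learning an orthogonal linear map between two monolingual embedding spaces from dictionary pairs (the dimensions and data are invented for illustration).

```python
# Minimal sketch, not the paper's exact model: fit an orthogonal map W that
# projects source-language embeddings into the target-language space, using
# translation pairs from a bilingual dictionary (orthogonal Procrustes).
import numpy as np

def fit_procrustes(src_vecs, tgt_vecs):
    """Solve min_W ||src_vecs @ W - tgt_vecs||_F with W orthogonal."""
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

# Hypothetical toy data: 100 dictionary pairs, 50-dimensional embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))  # source-language vectors of dictionary entries
tgt = rng.normal(size=(100, 50))  # target-language vectors of their translations

W = fit_procrustes(src, tgt)
projected = src @ W               # source vectors mapped into the target space
```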

    Transfer Learning across Networks for Collective Classification

    This paper addresses the problem of transferring useful knowledge from a source network to predict node labels in a newly formed target network. While existing transfer learning research has primarily focused on vector-based data, in which the instances are assumed to be independent and identically distributed, how to effectively transfer knowledge across different information networks has not been well studied, mainly because networks may have their distinct node features and link relationships between nodes. In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks. The proposed algorithm discovers these latent features by constructing label propagation matrices in the source and target networks, and mapping them into a shared latent feature space. The latent features capture common structure patterns shared by two networks, and serve as domain-independent features to be transferred between networks. Together with domain-dependent node features, we thereafter propose an iterative classification algorithm that leverages label correlations to predict node labels in the target network. Experiments on real-world networks demonstrate that our proposed algorithm can successfully achieve knowledge transfer between networks to help improve the accuracy of classifying nodes in the target network. Comment: Published in the proceedings of IEEE ICDM 201
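
    The transfer in this paper hinges on mapping label-propagation structure from each network into a shared latent feature space. As a rough, generic illustration (not the authors' algorithm), the sketch below builds a row-normalized propagation matrix for a network and takes its leading singular vectors as per-node structural features; the same construction could be applied to both a source and a target network.

```python
# Rough illustration (not the paper's exact algorithm): derive latent
# structural features of a network from its label-propagation matrix.
import numpy as np

def latent_structure_features(adj, k=2):
    """Row-normalize the adjacency matrix into a propagation matrix and
    return the top-k left singular vectors (scaled) as per-node features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0               # avoid division by zero for isolated nodes
    prop = adj / deg                  # label-propagation (random-walk) matrix
    u, s, _ = np.linalg.svd(prop)
    return u[:, :k] * s[:k]           # k latent features per node

# Hypothetical toy network of 6 nodes forming two loose clusters.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
features = latent_structure_features(adj, k=2)
```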

    Learning how to Active Learn: A Deep Reinforcement Learning Approach

    Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods; however, the effectiveness of such methods is limited, and the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning. Comment: To appear in EMNLP 201
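
    The core idea is that the hand-crafted heuristic is replaced by a policy that decides, instance by instance, whether to pay for an annotation. The paper learns this policy with deep reinforcement learning and transfers it across languages; the sketch below is only a stripped-down, stream-based version in which a stand-in scoring rule plays the policy's role and the uncertainty values are invented.

```python
# Stripped-down sketch of stream-based active learning with a selection policy.
# The real method learns the policy with deep RL; here a fixed threshold over
# a made-up "uncertainty" state stands in for it.
import random

def policy(uncertainty):
    """Stand-in for a learned selection policy: return True to request a label."""
    return uncertainty > 0.5

def active_learning_episode(stream, budget):
    """Walk an unlabeled stream, spending the annotation budget where the
    policy asks for a label; return the indices chosen for annotation."""
    chosen = []
    for i, uncertainty in enumerate(stream):
        if budget == 0:
            break
        if policy(uncertainty):
            chosen.append(i)
            budget -= 1
    return chosen

# Hypothetical model uncertainties for 20 unlabeled instances.
random.seed(0)
stream = [random.random() for _ in range(20)]
print(active_learning_episode(stream, budget=5))
```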

    IceCube and HAWC constraints on very-high-energy emission from the Fermi bubbles

    The nature of the γ-ray emission from the Fermi bubbles is unknown. Both hadronic and leptonic models have been formulated to explain the peculiar γ-ray signal observed by the Fermi-LAT between 0.1 and 500 GeV. If this emission continues above ∼30 TeV, hadronic models of the Fermi bubbles would provide a significant contribution to the high-energy neutrino flux detected by the IceCube observatory. Even in models where leptonic γ-rays produce the Fermi bubbles flux at GeV energies, a hadronic component may be observable at very high energies. The combination of IceCube and HAWC measurements has the ability to distinguish these scenarios through a comparison of the neutrino and γ-ray fluxes at a similar energy scale. We examine the most recent four-year dataset produced by the IceCube collaboration and find no evidence for neutrino emission originating from the Fermi bubbles. In particular, we find that previously suggested excesses are consistent with the diffuse astrophysical background with a p-value of 0.22 (0.05 in an extreme scenario in which all the IceCube events that overlap with the bubbles come from them). Moreover, we show that existing and upcoming HAWC observations provide independent constraints on any neutrino emission from the Fermi bubbles, due to the close correlation between the γ-ray and neutrino fluxes in hadronic interactions. The combination of these results disfavors a significant contribution from the Fermi bubbles to the IceCube neutrino flux. Comment: 9 pages, 4 figures, to appear in PR
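
    The consistency statement above rests on comparing observed event counts in the bubble regions against the expected diffuse astrophysical background. As a purely illustrative sketch (the numbers below are invented, not taken from the paper), a Poisson tail probability of this kind can be computed as follows.

```python
# Illustrative only: Poisson tail probability of observing at least n_obs
# events given an expected background count. All numbers are hypothetical.
from scipy.stats import poisson

n_obs = 5          # hypothetical number of events overlapping the region
background = 3.2   # hypothetical expected background (diffuse flux) count

# P(N >= n_obs | background) = survival function evaluated at n_obs - 1.
p_value = poisson.sf(n_obs - 1, background)
print(f"p-value = {p_value:.2f}")
```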