Binary Classification with Positive Labeling Sources
To create large amounts of training labels for machine learning models
effectively and efficiently, researchers have turned to Weak Supervision (WS),
which uses programmatic labeling sources rather than manual annotation.
Existing works of WS for binary classification typically assume the presence of
labeling sources that are able to assign both positive and negative labels to
data in roughly balanced proportions. However, for many tasks of interest where
there is a minority positive class, negative examples could be too diverse for
developers to generate indicative labeling sources. Thus, in this work, we
study the application of WS on binary classification tasks with positive
labeling sources only. We propose WEAPO, a simple yet competitive WS method for
producing training labels without negative labeling sources. On 10 benchmark
datasets, we show WEAPO achieves the highest averaged performance in terms of
both the quality of synthesized labels and the performance of the final
classifier supervised with these labels. We incorporated the implementation of
WEAPO into WRENCH, an existing benchmarking platform. Comment: CIKM 2022 (short paper)
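The positive-only setting above can be made concrete with a toy label-aggregation sketch. This is not the WEAPO algorithm itself, just a minimal illustration of the setting, with an invented coverage-plus-prior heuristic: every labeling source either votes positive (1) or abstains (0), and coverage pushes the soft label above the class prior.

```python
# Sketch: turning positive-or-abstain votes from weak labeling sources into
# soft training labels. NOT the WEAPO method itself -- a toy heuristic.

def soft_labels(votes, class_prior=0.1):
    """votes: per-example lists with one entry per source, 1 (positive) or 0 (abstain).
    Returns a probabilistic positive label per example: examples covered by
    more sources are pushed further above the class prior."""
    n_sources = len(votes[0])
    labels = []
    for v in votes:
        k = sum(v)  # number of sources firing on this example
        # interpolate between the prior (no evidence) and 1.0 (all sources fire)
        labels.append(class_prior + (1.0 - class_prior) * k / n_sources)
    return labels

votes = [
    [1, 1, 1],  # strongly covered -> confident positive
    [1, 0, 0],  # weakly covered
    [0, 0, 0],  # no source fires -> fall back to the prior
]
print([round(x, 3) for x in soft_labels(votes)])  # [1.0, 0.4, 0.1]
```

A real weak-supervision label model would additionally weight each source by its estimated accuracy rather than counting all votes equally.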
Neural-Hidden-CRF: A Robust Weakly-Supervised Sequence Labeler
We propose a neuralized undirected graphical model called Neural-Hidden-CRF
to solve the weakly-supervised sequence labeling problem. Under the umbrella of
probabilistic undirected graph theory, the proposed Neural-Hidden-CRF embedded
with a hidden CRF layer models the variables of word sequence, latent ground
truth sequence, and weak label sequence with the global perspective that
undirected graphical models particularly enjoy. In Neural-Hidden-CRF, we can
capitalize on the powerful language model BERT or other deep models to provide
rich contextual semantic knowledge to the latent ground truth sequence, and use
the hidden CRF layer to capture the internal label dependencies.
Neural-Hidden-CRF is conceptually simple and empirically powerful. It obtains
new state-of-the-art results on one crowdsourcing benchmark and three
weak-supervision benchmarks, including outperforming the recent advanced model
CHMM by 2.80 F1 points and 2.23 F1 points in average generalization and
inference performance, respectively. Comment: 13 pages, 4 figures, accepted by SIGKDD-202
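The role of a hidden CRF layer, capturing dependencies between adjacent labels on top of per-token scores from an encoder such as BERT, can be illustrated with the standard Viterbi decoding CRFs use at inference time. The tag set, emission scores, and transition scores below are toy values, not the paper's learned parameters.

```python
# Sketch: Viterbi decoding over per-token label scores plus a label-transition
# matrix -- the kind of label-dependency modeling a CRF layer adds on top of
# contextual token representations. Toy numbers only.

def viterbi(emissions, transitions, labels):
    """emissions: list of {label: score} per token; transitions: {(prev, cur): score}."""
    best = {l: emissions[0][l] for l in labels}  # best path score ending in each label
    backptr = []
    for em in emissions[1:]:
        new_best, ptr = {}, {}
        for cur in labels:
            prev = max(labels, key=lambda p: best[p] + transitions[(p, cur)])
            new_best[cur] = best[prev] + transitions[(prev, cur)] + em[cur]
            ptr[cur] = prev
        best, backptr = new_best, backptr + [ptr]
    # trace back the highest-scoring path
    last = max(labels, key=lambda l: best[l])
    path = [last]
    for ptr in reversed(backptr):
        path.append(ptr[path[-1]])
    return list(reversed(path))

labels = ["O", "B-PER", "I-PER"]
transitions = {(p, c): 0.0 for p in labels for c in labels}
transitions[("O", "I-PER")] = -10.0   # penalize I-PER directly after O
transitions[("B-PER", "I-PER")] = 2.0
emissions = [{"O": 0.1, "B-PER": 1.0, "I-PER": 0.0},
             {"O": 0.5, "B-PER": 0.0, "I-PER": 0.6}]
print(viterbi(emissions, transitions, labels))  # ['B-PER', 'I-PER']
```

The transition scores are what let the decoder prefer a globally consistent tag sequence over the per-token argmax.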
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
We study the open-domain named entity recognition (NER) problem under distant
supervision. Distant supervision, though it does not require large amounts of
manual annotation, yields highly incomplete and noisy distant labels via
external knowledge bases. To address this challenge, we propose a new
computational framework -- BOND, which leverages the power of pre-trained
language models (e.g., BERT and RoBERTa) to improve the prediction performance
of NER models. Specifically, we propose a two-stage training algorithm: In the
first stage, we adapt the pre-trained language model to the NER tasks using the
distant labels, which can significantly improve recall and precision; in
the second stage, we drop the distant labels, and propose a self-training
approach to further improve the model performance. Thorough experiments on 5
benchmark datasets demonstrate the superiority of BOND over existing distantly
supervised NER methods. The code and distantly labeled data have been released
at https://github.com/cliang1453/BOND. Comment: Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery
and Data Mining (KDD '20)
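The second-stage idea, iteratively promoting confident predictions to pseudo-labels, can be sketched generically. The toy one-dimensional nearest-centroid "model" and the margin-based confidence below are illustrative stand-ins only; BOND itself self-trains a BERT/RoBERTa tagger with soft labels.

```python
# Sketch: a generic confidence-thresholded self-training loop, in the spirit
# of BOND's second stage. A toy 1-D nearest-centroid model stands in for the
# neural network; margin to the second-nearest centroid stands in for confidence.

def fit_centroids(points, labels):
    cents = {}
    for lab in set(labels):
        vals = [p for p, l in zip(points, labels) if l == lab]
        cents[lab] = sum(vals) / len(vals)
    return cents

def predict(cents, p):
    # confidence = margin between nearest and second-nearest centroid
    ranked = sorted(cents, key=lambda lab: abs(p - cents[lab]))
    conf = abs(p - cents[ranked[1]]) - abs(p - cents[ranked[0]])
    return ranked[0], conf

def self_train(seed_pts, seed_labs, unlabeled, threshold=1.0, rounds=3):
    pts, labs = list(seed_pts), list(seed_labs)
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = fit_centroids(pts, labs)
        keep = []
        for p in pool:
            lab, conf = predict(cents, p)
            if conf >= threshold:          # pseudo-label only confident points
                pts.append(p); labs.append(lab)
            else:
                keep.append(p)             # ambiguous points stay unlabeled
        pool = keep
    return fit_centroids(pts, labs)

cents = self_train([0.0, 10.0], ["A", "B"], [1.0, 2.0, 8.5, 9.0, 5.1])
print(cents)
```

Note how the ambiguous point near the decision boundary (5.1) is never pseudo-labeled; discarding low-confidence pseudo-labels is what keeps self-training from amplifying noise.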
Machine Learning Models for Efficient and Robust Natural Language Processing
Natural language processing (NLP) has come of age. For example, semantic role labeling (SRL), which automatically annotates sentences with a labeled graph representing who did what to whom, has seen nearly a 40% reduction in error over the past ten years, bringing it to useful accuracy. As a result, a myriad of practitioners now want to deploy NLP systems on billions of documents across many domains. However, state-of-the-art NLP systems are typically optimized for neither cross-domain robustness nor computational efficiency. In this dissertation I develop machine learning methods to facilitate fast and robust inference across many common NLP tasks.
First, I describe paired learning and inference algorithms for dynamic feature selection that accelerate inference in linear classifiers, the heart of the fastest NLP models, by 5-10 times. I then present iterated dilated convolutional neural networks (ID-CNNs), a distinct combination of network structure, parameter sharing and training procedures that increase inference speed by 14-20 times with accuracy matching bidirectional LSTMs, the most accurate models for NLP sequence labeling. Finally, I describe linguistically-informed self-attention (LISA), a neural network model that combines multi-head self-attention with multi-task learning to facilitate improved generalization to new domains. I show that incorporating linguistic structure in this way leads to substantial improvements over the previous state-of-the-art (syntax-free) neural network models for SRL, especially when evaluating out-of-domain. I conclude with a brief discussion of potential future directions stemming from my thesis work.
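The advantage of iterated dilated convolutions is visible from the receptive-field arithmetic alone: with kernel width 3 and dilations doubling per layer, coverage grows exponentially with depth, whereas plain width-3 convolutions grow it only linearly. A small sketch (the dilation schedule here is illustrative, not the exact one from the dissertation):

```python
# Sketch: receptive-field growth of stacked 1-D convolutions.
# Each layer extends the receptive field by (width - 1) * dilation tokens.

def receptive_field(dilations, width=3):
    rf = 1
    for d in dilations:
        rf += (width - 1) * d
    return rf

print(receptive_field([1, 1, 1, 1]))   # 4 plain convs: 9 tokens
print(receptive_field([1, 2, 4, 8]))   # 4 dilated convs: 31 tokens
```

The same parameter count thus covers a context more than three times wider, which is where the 14-20x speedup at matched accuracy comes from.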
Semantic Representation and Inference for NLP
Semantic representation and inference is essential for Natural Language
Processing (NLP). The state of the art for semantic representation and
inference is deep learning, and particularly Recurrent Neural Networks (RNNs),
Convolutional Neural Networks (CNNs), and Transformer self-attention models.
This thesis investigates the use of deep learning for novel semantic
representation and inference, and makes contributions in the following three
areas: creating training data, improving semantic representations and extending
inference learning. In terms of creating training data, we contribute the
largest publicly available dataset of real-life factual claims for the purpose
of automatic claim verification (MultiFC), and we present a novel inference
model composed of multi-scale CNNs with different kernel sizes that learn from
external sources to infer fact checking labels. In terms of improving semantic
representations, we contribute a novel model that captures non-compositional
semantic indicators. By definition, the meaning of a non-compositional phrase
cannot be inferred from the individual meanings of its composing words (e.g.,
hot dog). Motivated by this, we operationalize the compositionality of a phrase
contextually by enriching the phrase representation with external word
embeddings and knowledge graphs. Finally, in terms of inference learning, we
propose a series of novel deep learning architectures that improve inference by
using syntactic dependencies, by ensembling role guided attention heads,
incorporating gating layers, and concatenating multiple heads in novel and
effective ways. This thesis consists of seven publications (five published and
two under review). Comment: PhD thesis, University of Copenhagen
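The multi-scale CNN idea mentioned above, filters of several kernel sizes whose pooled outputs are combined, can be sketched in miniature. The sequence values and filter weights below are toy numbers, and a real model learns many filters per kernel size rather than one.

```python
# Sketch: "multi-scale" 1-D convolution -- run filters of several kernel sizes
# over a token sequence and concatenate the max-pooled outputs, so the model
# sees n-gram evidence at several granularities. Toy weights only.

def conv1d_maxpool(seq, kernel):
    k = len(kernel)
    outs = [sum(w * x for w, x in zip(kernel, seq[i:i + k]))
            for i in range(len(seq) - k + 1)]
    return max(outs)  # max-pool over positions

def multi_scale_features(seq, kernels):
    # one pooled feature per kernel size
    return [conv1d_maxpool(seq, k) for k in kernels]

seq = [0.2, 1.0, 0.5, -0.3, 0.8]
kernels = [[1.0, 1.0],            # bigram-scale filter
           [0.5, 1.0, 0.5]]       # trigram-scale filter
print([round(f, 2) for f in multi_scale_features(seq, kernels)])  # [1.5, 1.35]
```

Concatenating the per-scale features gives a downstream classifier evidence at several phrase lengths simultaneously.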
Data Scarcity in Event Analysis and Abusive Language Detection
Lack of data is almost always the cause of the suboptimal performance of neural networks. Even though data-scarce scenarios can be simulated for any task by assuming limited access to training data, we study two problem areas where data scarcity is a practical challenge: event analysis and abusive content detection.
Journalists, social scientists, and political scientists need to retrieve and analyze event mentions in unstructured text to compute useful statistical information to understand society. We claim that it is hard to specify an information need about events using a keyword-based representation and propose a Query by Example (QBE) setting for event retrieval. In the QBE setting, we assume that there are a few example sentences mentioning the event class a user is interested in, and we aim to retrieve relevant events using only the examples as a query. Traditional event detection approaches are not applicable in this setting because event detection datasets are constructed from pre-defined schemas, which limit them to a small set of event and event-argument types. Moreover, the amount of annotated data in event detection datasets is so limited that it only allows us to build a retrieval corpus for evaluation. Thus we assume that there are no relevance judgments to train an event retrieval model -- except for the few examples of a specific event type. We create three QBE evaluation settings from three event detection datasets: PoliceKilling, ACE, and IndiaPoliceEvents. For the PoliceKilling dataset, where a relevant sentence describes a police killing event, we show that a query model constructed from the NLP features extracted from the few given examples is effective compared to event detection baselines. For the ACE dataset, where there are thirty-three types of events, we construct a QBE setting for each type and show that a sentence embedding approach transfers effectively for event matching.
Finally, we conducted a unified evaluation of all three datasets using the sentence-embedding-based model and showed that it outperforms strong baselines.
We further examine the effect of data scarcity in abusive language detection. We first study a specific type of abusive language -- hate speech. Neural hate speech detection models trained from one dataset poorly generalize to another dataset from a different domain. This is because characteristics of hate speech vary based on racial and cultural aspects. Our data scarcity scenario assumes that we have a hate speech dataset from a domain and it needs to generalize to a test set from another domain using the unlabeled data from the test domain only. Thus we assume zero target domain data in this scenario. To tackle the data scarcity, we propose an unsupervised domain adaptation approach to augment labeled data for hate speech detection. We evaluate the approach with three different models (character CNNs, BiLSTMs, and BERT) on three different collections. We show our approach improves Area under the Precision/Recall curve by as much as 42% and recall by as much as 278%, with no loss (and in some cases a significant gain) in precision.
Finally, we examine the cross-lingual abusive language detection problem. Abusive language is a superclass of hate speech that includes profanity, aggression, offensiveness, cyberbullying, toxicity, and hate speech itself. There is a large collection of abusive language detection datasets in English, such as Jigsaw. For other languages there exist datasets for abusive language detection, but with very limited data. We propose a cross-lingual transfer learning approach to learn an effective neural abusive language classifier for such low-resource languages with help from a dataset from a resource-rich language. The framework is based on a nearest-neighbor architecture and is thus interpretable by design. It is a modern instantiation of the classic k-nearest neighbor model, as we use transformer representations in all its components. Unlike prior work on neighborhood-based approaches, we encode the neighborhood information based on query-neighbor interactions. We propose two encoding schemes and show their effectiveness using both qualitative and quantitative analyses. Our evaluation results on eight languages from two different datasets for abusive language detection show sizable improvements in F1 over strong baselines.
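The classic k-nearest-neighbor model that the framework modernizes can be sketched with cosine similarity over vectors. The vectors below are toy stand-ins; the thesis itself uses transformer representations and learned query-neighbor encodings rather than simple majority voting.

```python
# Sketch: kNN classification over vector representations with cosine
# similarity -- interpretable by design, since the prediction can always be
# traced back to the retrieved neighbors. Toy vectors and labels only.

import math
from collections import Counter

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_predict(query, examples, k=3):
    """examples: list of (vector, label); returns (majority label, neighbors)."""
    neighbors = sorted(examples, key=lambda ex: cosine(query, ex[0]), reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0], neighbors

examples = [([1.0, 0.1], "abusive"), ([0.9, 0.2], "abusive"),
            ([0.1, 1.0], "benign"), ([0.0, 0.9], "benign")]
label, neighbors = knn_predict([0.95, 0.15], examples, k=3)
print(label)  # abusive
```

Returning the neighbors alongside the label is what makes the decision inspectable: a moderator can read the retrieved training examples that drove the prediction.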
An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web
Although a great deal of attention has been paid to how conspiracy theories
circulate on social media and their factual counterpart conspiracies, there has
been little computational work done on describing their narrative structures.
We present an automated pipeline for the discovery and description of the
generative narrative frameworks of conspiracy theories on social media, and
actual conspiracies reported in the news media. We base this work on two
separate repositories of posts and news articles describing the well-known
conspiracy theory Pizzagate from 2016, and the New Jersey conspiracy Bridgegate
from 2013. We formulate a graphical generative machine learning model where
nodes represent actors/actants, and multi-edges and self-loops among nodes
capture context-specific relationships. Posts and news items are viewed as
samples of subgraphs of the hidden narrative network. The problem of
reconstructing the underlying structure is posed as a latent model estimation
problem. We automatically extract and aggregate the actants and their
relationships from the posts and articles. We capture context-specific actants
and inter-actant relationships by developing a system of supernodes and
subnodes. We use these to construct a network, which constitutes the underlying
narrative framework. We show how the Pizzagate framework relies on the
conspiracy theorists' interpretation of "hidden knowledge" to link otherwise
unlinked domains of human interaction, and hypothesize that this multi-domain
focus is an important feature of conspiracy theories. While Pizzagate relies on
the alignment of multiple domains, Bridgegate remains firmly rooted in the
single domain of New Jersey politics. We hypothesize that the narrative
framework of a conspiracy theory might stabilize quickly in contrast to the
narrative framework of an actual one, which may develop more slowly as
revelations come to light. Comment: conspiracy theory, narrative structure
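The aggregation step, collecting extracted relationships into a multi-edge network over actants, can be sketched as follows. The triples are invented placeholders, not data from either corpus, and the real pipeline additionally groups actants into supernodes and subnodes.

```python
# Sketch: accumulating (actant, relationship, actant) triples extracted from
# posts into a multi-edge network, where each ordered actant pair keeps all
# of its context-specific relationships, with counts.

from collections import defaultdict

def build_network(triples):
    edges = defaultdict(lambda: defaultdict(int))
    for src, rel, dst in triples:
        edges[(src, dst)][rel] += 1   # multi-edge: one counter per relationship
    return edges

triples = [
    ("actor_A", "meets", "actor_B"),
    ("actor_A", "meets", "actor_B"),
    ("actor_A", "emails", "actor_B"),
    ("actor_B", "accuses", "actor_C"),
]
net = build_network(triples)
print(dict(net[("actor_A", "actor_B")]))  # {'meets': 2, 'emails': 1}
```

Viewing each post as a sampled subgraph of this network is what turns narrative-framework discovery into a latent structure estimation problem.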
Extracting and Representing Entities, Types, and Relations
Making complex decisions in areas like science, government policy, finance, and clinical treatments requires integrating and reasoning over disparate data sources. While some decisions can be made from a single source of information, others require considering multiple pieces of evidence and how they relate to one another. Knowledge graphs (KGs) provide a natural approach for addressing this type of problem: they can serve as long-term stores of abstracted knowledge organized around concepts and their relationships, and can be populated from heterogeneous sources including databases and text. KGs can facilitate higher-level reasoning, influence the interpretation of new data, and serve as a scaffolding for knowledge that enhances the acquisition of new information. A symbolic graph over a fixed, human-defined schema encoding facts about entities and their relations is the predominant method for representing knowledge, but this approach is brittle, lacks specificity, and is inevitably highly incomplete. At the other extreme, recent work on purely text-based knowledge models lacks the abstractions necessary for complex reasoning.
In this thesis I will present work incorporating neural models, rich structured ontologies, and unstructured raw text for representing knowledge. I will first discuss my work enhancing universal schema, a method for learning a latent schema over both existing structured resources and unstructured free text, embedding them jointly within a shared semantic space. Next, I inject additional hierarchical structure into the embedding space of concepts, resulting in more efficient statistical sharing among related concepts and improved accuracy in both fine-grained entity typing and linking. I then present initial work representing knowledge in context, including a single model for extracting all entities and long-range relations simultaneously over full paragraphs while jointly linking these entities to a KG. I will conclude by discussing possible future directions for representing knowledge in context.
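The universal-schema idea, placing KB relations and textual surface patterns in one embedding space and scoring candidate facts against entity-pair vectors, can be sketched with toy vectors. All names and numbers below are invented for illustration; the actual models learn these embeddings from data.

```python
# Sketch: universal-schema-style scoring -- structured KB relations and raw
# textual patterns live in one shared vector space, and a candidate fact is
# scored by the dot product of its entity-pair and relation vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

relation_vecs = {
    "born_in":         [0.9, 0.1],   # structured KB relation
    "X was born in Y": [0.8, 0.2],   # surface pattern mined from free text
    "works_for":       [0.1, 0.9],
}
pair_vecs = {
    ("Ada_Lovelace", "London"): [1.0, 0.0],
}

def score(pair, relation):
    return dot(pair_vecs[pair], relation_vecs[relation])

p = ("Ada_Lovelace", "London")
# the textual pattern and the KB relation score this pair similarly, which is
# what lets text evidence fill in facts missing from the KB
print(score(p, "born_in"), score(p, "X was born in Y"), score(p, "works_for"))
```

Because KB relations and text patterns share one space, a pair observed only with the textual pattern still gets a high score for the corresponding KB relation.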
Distributionally Robust Classification on a Data Budget
Real world uses of deep learning require predictable model behavior under
distribution shifts. Models such as CLIP show emergent natural distributional
robustness comparable to humans, but may require hundreds of millions of
training samples. Can we train robust learners in a domain where data is
limited? To rigorously address this question, we introduce JANuS (Joint
Annotations and Names Set), a collection of four new training datasets with
images, labels, and corresponding captions, and perform a series of carefully
controlled investigations of factors contributing to robustness in image
classification, then compare those results to findings derived from a
large-scale meta-analysis. Using this approach, we show that standard ResNet-50
trained with the cross-entropy loss on 2.4 million image samples can attain
comparable robustness to a CLIP ResNet-50 trained on 400 million samples. To
our knowledge, this is the first result showing (near) state-of-the-art
distributional robustness on limited data budgets. Our dataset is available at
\url{https://huggingface.co/datasets/penfever/JANuS_dataset}, and the code used
to reproduce our experiments can be found at
\url{https://github.com/penfever/vlhub/}. Comment: TMLR 2023; OpenReview link:
https://openreview.net/forum?id=D5Z2E8CNs
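For reference, the cross-entropy loss named in the abstract, computed here for a single example in plain Python; deep learning frameworks compute the same quantity over batches of logits.

```python
# Sketch: softmax cross-entropy for one example -- the standard classification
# loss used to train the ResNet-50 baseline mentioned above.

import math

def cross_entropy(logits, target):
    """Negative log-probability of the target class under softmax(logits)."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    log_prob = (logits[target] - m) - math.log(sum(exps))
    return -log_prob

# the target class has the highest logit, so the loss is small but nonzero
print(round(cross_entropy([2.0, 0.5, 0.1], target=0), 4))
```

The max-subtraction trick leaves the result unchanged mathematically but prevents overflow in `exp` for large logits.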