Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text
The ability to comprehend wishes or desires and their fulfillment is
important to Natural Language Understanding. This paper introduces the task of
identifying if a desire expressed by a subject in a given short piece of text
was fulfilled. We propose various unstructured and structured models that
capture fulfillment cues such as the subject's emotional state and actions. Our
experiments with two different datasets demonstrate the importance of
understanding the narrative and discourse structure to address this task.
Identifying lexical relationships and entailments with distributional semantics
Many modern efforts in Natural Language Understanding depend on rich and powerful semantic representations of words. Systems for sophisticated logical and textual reasoning often depend heavily on lexical resources to provide critical information about relationships between words, but these lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long offered methods for automatically inducing meaning representations from large corpora, with little or no annotation efforts. The resulting representations are valuable proxies of semantic similarity, but simply knowing two words are similar cannot tell us their relationship, or whether one entails the other.
In this thesis, we consider how methods from Distributional Semantics may be applied to the difficult task of lexical entailment, where one must predict whether one word implies another. We approach this by showing contributions in the areas of hypernymy detection, lexical relationship prediction, lexical substitution, and textual entailment. We propose novel experimental setups, models, analysis, and interpretations, which ultimately provide us with a better understanding of both the nature of lexical entailment and the information available within distributional representations.
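The point that similarity alone cannot reveal a relationship's direction can be made concrete: cosine similarity is symmetric, so it cannot encode which word entails which, while an inclusion-style score (in the spirit of the distributional inclusion hypothesis) can. A minimal sketch with invented toy vectors:

```python
import math

# Toy count vectors over three hypothetical context dimensions (invented data).
dog = [4.0, 2.0, 0.0]
animal = [4.0, 2.0, 3.0]

def cosine(u, v):
    """Symmetric similarity: cosine(u, v) == cosine(v, u)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def inclusion(u, v):
    """Directional score: how much of u's contexts are covered by v's."""
    return sum(min(a, b) for a, b in zip(u, v)) / sum(u)

# Cosine gives the same score in both directions...
assert cosine(dog, animal) == cosine(animal, dog)
# ...while inclusion is asymmetric: dog's contexts sit inside animal's but not
# vice versa, suggesting the direction dog => animal rather than animal => dog.
assert inclusion(dog, animal) > inclusion(animal, dog)
```

The vectors and the specific inclusion formula here are illustrative assumptions; the thesis evaluates a range of such detection methods rather than this exact one.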
Exploiting transitivity in probabilistic models for ontology learning
Capturing word meaning is one of the challenges of natural language processing (NLP). Formal models of meaning such as semantic networks of words or
concepts are knowledge repositories used in a variety of applications. To be
effectively used, these networks have to be large or, at least, adapted to specific
domains. Our main goal is to contribute practically to the research on semantic
networks learning models by covering different aspects of the task.
We propose a novel probabilistic model for learning semantic networks that
expands existing semantic networks, taking into account both corpus-extracted
evidence and the structure of the generated semantic networks. The model exploits structural properties of target relations, such as transitivity, during learning. The probability for a given relation instance to belong to the semantic
networks of words depends both on its direct probability and on the induced
probability derived from the structural properties of the target relation. Our
model presents some innovations in estimating these probabilities.
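The combination of direct and induced probabilities described above can be sketched as follows. The convex-combination weight `alpha`, the data structures, the max-over-intermediates rule, and all toy numbers are illustrative assumptions, not the thesis's exact estimators:

```python
# Hedged sketch: combine corpus evidence with transitivity-induced evidence
# for a candidate is-a edge (a, c). All names and weights are assumptions.

def induced_probability(a, c, network, direct):
    """Evidence for a->c induced by transitivity: the best chain a->b->c
    through an intermediate b already linked to a in the network."""
    return max(
        (direct.get((a, b), 0.0) * direct.get((b, c), 0.0) for b in network.get(a, ())),
        default=0.0,
    )

def edge_probability(a, c, network, direct, alpha=0.5):
    """Convex combination of direct (corpus) and induced (structural) evidence."""
    return alpha * direct.get((a, c), 0.0) + (1 - alpha) * induced_probability(a, c, network, direct)

# Toy network: dog is-a mammal, mammal is-a animal.
network = {"dog": {"mammal"}, "mammal": {"animal"}}
direct = {("dog", "mammal"): 0.9, ("mammal", "animal"): 0.8, ("dog", "animal"): 0.2}

# Weak direct evidence for "dog is-a animal" (0.2) is boosted by the strong
# transitive chain through "mammal" (0.9 * 0.8 = 0.72).
p = edge_probability("dog", "animal", network, direct)  # 0.5*0.2 + 0.5*0.72 = 0.46
```

The design point this illustrates is that an edge poorly attested in the corpus can still receive high probability when the network structure implies it.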
We also propose a model that can be used in different specific knowledge
domains with little additional adaptation effort. In this approach, a model
learned on a generic domain is exploited to extract new information in a
specific domain.
Finally, we propose an incremental ontology learning system: Semantic
Turkey Ontology Learner (ST-OL). ST-OL addresses two principal issues. The
first is an efficient way to interact with end users and to put their
decisions into the learning loop; we obtain this interaction through an
ontology editor. The second is a probabilistic model for learning semantic
networks of words that exploits transitive relations to induce better
extraction models. ST-OL provides a graphical user interface and a
human-computer interaction workflow supporting the incremental learning
loop of our model.
Lexicosyntactic Inference in Neural Models
We investigate neural models' ability to capture lexicosyntactic inferences:
inferences triggered by the interaction of lexical and syntactic information.
We take the task of event factuality prediction as a case study and build a
factuality judgment dataset for all English clause-embedding verbs in various
syntactic contexts. We use this dataset, which we make publicly available, to
probe the behavior of current state-of-the-art neural systems, showing that
these systems make certain systematic errors that are clearly visible through
the lens of factuality prediction.
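To make "lexicosyntactic inference" concrete, here is a hedged illustration (invented mini-examples, not items from the paper's dataset) of how the same embedding verb can flip the factuality of its complement depending on the syntactic frame:

```python
# Invented examples: "forget that S" is factive (S happened), while
# "forget to VP" is implicative with the opposite polarity (VP did not happen).
# The lexical item alone does not determine factuality; the frame matters too.
examples = [
    # (sentence, embedded event, did the event happen?)
    ("Jo forgot that Bo left.", "Bo left", True),
    ("Jo forgot to leave.",     "Jo left", False),
    ("Mo remembered to leave.", "Mo left", True),
]
factual = {event: happened for _, event, happened in examples}
```

A factuality predictor must combine the verb's lexical semantics with the syntax of its complement to get all three judgments right, which is exactly the interaction the probe targets.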
- …