Graph Based Disambiguation of Named Entities using Linked Data
Identifying entities such as people, organizations, songs, or places in natural language texts is essential for semantic search, machine translation, and information extraction. A key challenge is the ambiguity of entity names, which requires robust methods for disambiguating names to the entities registered in a knowledge base. Although several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting a novel knowledge-base-agnostic approach to named entity disambiguation. Our approach combines the HITS algorithm with label expansion strategies and string similarity measures such as n-gram similarity. Based on this combination, we can efficiently detect the correct URIs for a given set of named entities within an input text.
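The two ingredients named in this abstract can be illustrated with a short sketch. The function names, the string padding, and the iteration count below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ngrams(s, n=3):
    """Character n-grams of a padded, lower-cased string."""
    s = " " + s.lower() + " "
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Jaccard overlap of the two strings' character n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

def hits(adj, iters=50):
    """Plain HITS power iteration on an adjacency matrix."""
    h = np.ones(adj.shape[0])
    a = np.ones(adj.shape[0])
    for _ in range(iters):
        a = adj.T @ h          # authorities: pointed to by good hubs
        a /= np.linalg.norm(a)
        h = adj @ a            # hubs: point to good authorities
        h /= np.linalg.norm(h)
    return h, a
```

In a disambiguation graph over candidate URIs, the authority scores can rank candidates, while the n-gram similarity scores how well a surface form matches a candidate's label.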
Dialogue System Augmented with Commonsense Knowledge
Building an open-domain dialogue system is a challenging task in current research. In order to successfully maintain a conversation with a human, a dialogue system must develop many qualities: being engaging and empathetic, showing a unique personality, and having general knowledge about the world. Prior research has shown that it is possible to develop a chatbot system that combines these features, and this work explores the problem further. Most state-of-the-art dialogue systems are guided by unstructured knowledge such as Wikipedia articles, but there is a lack of research on how structured knowledge bases can be used for open-domain dialogue generation. This work proposes using the structured knowledge base ConceptNet for knowledge-grounded dialogue generation. A novel knowledge extraction algorithm is proposed and used to incorporate knowledge into existing dialogue datasets. The current state-of-the-art model BlenderBot is fine-tuned on the new datasets and shows improved novelty in the utterances it generates.
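As a rough illustration of the grounding step, the sketch below filters ConceptNet-style (start, relation, end) triples by the concepts mentioned in an utterance. The triples and the token-matching rule are hypothetical and far simpler than the paper's extraction algorithm:

```python
# Hypothetical ConceptNet-style triples: (start concept, relation, end concept)
TRIPLES = [
    ("dog", "IsA", "pet"),
    ("dog", "CapableOf", "bark"),
    ("coffee", "UsedFor", "staying awake"),
]

def extract_knowledge(utterance, triples=TRIPLES):
    """Return triples whose start concept appears as a token of the utterance."""
    tokens = set(utterance.lower().replace("?", "").replace(".", "").split())
    return [t for t in triples if t[0] in tokens]
```

The retrieved triples could then be prepended to the dialogue context that the generator conditions on.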
Using a Neuro-Fuzzy-Genetic Data Mining Architecture to Determine a Marketing Strategy in a Charitable Organization's Donor Database
This paper describes the use of a neuro-fuzzy-genetic data mining architecture for finding hidden knowledge in, and modeling the data of, the 1997 donation campaign of an American charitable organization. This data was used during the 1998 KDD Cup competition. In the architecture, all input variables are first preprocessed and all continuous variables are fuzzified. Principal component analysis (PCA) is then applied to reduce the dimensionality of the input variables by finding combinations of variables, or factors, that describe major trends in the data. The reduced input variables are then used to train probabilistic neural networks (PNNs) to classify the dataset according to the groups considered. A rule extraction technique is then applied to extract hidden knowledge from the trained neural networks and represent it in the form of crisp and fuzzy if-then rules. In the final stage, a genetic algorithm is used as a rule-pruning module to eliminate weak rules from the rule base while ensuring that the classification accuracy of the rule base improves or stays unchanged. The pruned rule base helps the charitable organization maximize donations and understand the characteristics of the respondents to the direct-mail fund-raising campaign.
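The PNN stage of such a pipeline can be sketched as a Parzen-window classifier in a few lines of NumPy. The kernel width and the toy data are assumptions, and the real architecture also includes fuzzification, PCA, rule extraction, and genetic pruning, none of which is shown here:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network: per-class Gaussian (Parzen-window) density estimate."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = ((Xc - x) ** 2).sum(axis=1)                    # squared distances to class members
            scores.append(np.exp(-d2 / (2 * sigma**2)).mean())  # class-conditional density estimate
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Each test point is assigned to the class whose training members give it the highest average kernel density, which is why a PNN needs no iterative training.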
Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis
We present DEFIE, an approach to large-scale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions. Given a large corpus of definitions, we leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. The output of DEFIE is a high-quality knowledge base consisting of several million automatically acquired semantic relations.
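A toy version of the first step, mining a relation from the syntactic shape of a definition, might look like the regex sketch below. The pattern and the relation label are illustrative only; DEFIE uses full dependency parsing and word sense disambiguation, not surface patterns:

```python
import re

def extract_isa(definition):
    """Toy genus extraction from an 'X is a Y that ...' style definition."""
    m = re.match(
        r"(?P<term>[A-Za-z ]+?) is an? (?P<genus>[A-Za-z ]+?)(?: that| which|[.,]|$)",
        definition,
    )
    return (m.group("term").strip(), "IsA", m.group("genus").strip()) if m else None
```

Applied over millions of glosses, even a crude extractor like this yields many candidate triples; the hard part, which this sketch omits, is disambiguating and organizing them.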
Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods
Most previous work on knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has received little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side give consistently higher-quality predictions than baseline methods. We also perform a manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.
Comment: North American Chapter of the Association for Computational Linguistics - Human Language Technologies, 201
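A global objective over both observed and unobserved (entity, type) cells can be illustrated with a tiny logistic matrix factorization. The toy KB, embedding dimension, and learning rate are all assumptions, and this is far simpler than the models evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

entities = ["obama", "paris"]               # toy KB
types = ["person", "city"]
observed = {(0, 0), (1, 1)}                 # obama IsA person, paris IsA city

dim, lr = 4, 0.2
E = rng.normal(scale=0.1, size=(len(entities), dim))  # entity embeddings
T = rng.normal(scale=0.1, size=(len(types), dim))     # type embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Global objective: every cell of the entity-type matrix contributes,
# with unobserved cells treated as soft negatives.
for _ in range(1000):
    for i in range(len(entities)):
        for j in range(len(types)):
            label = 1.0 if (i, j) in observed else 0.0
            grad = sigmoid(E[i] @ T[j]) - label
            E[i], T[j] = E[i] - lr * grad * T[j], T[j] - lr * grad * E[i]

# After training, observed cells should score higher than unobserved ones.
```

Ranking each entity's candidate types by `sigmoid(E[i] @ T[j])` then surfaces likely missing type instances.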