A probabilistic justification for using tf.idf term weighting in information retrieval
This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf.idf term weighting. The paper shows that the new probabilistic interpretation of tf.idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.
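To make the weighting scheme under discussion concrete, here is a minimal sketch of the classic tf.idf formulation (raw term frequency times log inverse document frequency) that the paper reinterprets; it is not the paper's probabilistic estimator, and the toy corpus below is invented for illustration.

```python
import math
from collections import Counter

def tfidf(docs):
    """Classic tf.idf: weight(t, d) = tf(t, d) * log(N / df(t)),
    where tf is the term's frequency in document d, N is the
    number of documents, and df(t) is the number of documents
    containing term t."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
        for doc in docs
    ]

# Toy corpus of tokenized documents (illustrative only).
docs = [
    ["information", "retrieval", "model"],
    ["statistical", "language", "model"],
    ["retrieval", "ranking"],
]
weights = tfidf(docs)
# "model" occurs in 2 of 3 documents, so its idf is log(3/2).
```

Terms that occur in fewer documents receive a higher idf and therefore a higher weight, which is the behavior the paper seeks to justify probabilistically.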
Understanding natural language commands for robotic navigation and mobile manipulation
This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs, dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as "Put the tire pallet on the truck." The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.
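The core idea of instantiating model structure from the language itself can be sketched very roughly as follows; the parse format, node names, and walk below are invented for illustration and are not the paper's representation.

```python
# Toy hierarchical parse of "Put the tire pallet on the truck."
# Each tuple is (head phrase, *child constituents).
parse = ("put", ("the tire pallet",), ("on", ("the truck",)))

def instantiate(node, factors=None):
    """Walk the command's compositional structure and emit one
    factor per constituent phrase, mirroring how a grounding
    graph's shape follows the shape of the language."""
    if factors is None:
        factors = []
    if isinstance(node, tuple):
        head, *children = node
        factors.append(head)
        for child in children:
            instantiate(child, factors)
    else:
        factors.append(node)
    return factors
```

A different command would yield a different parse and hence a differently shaped model, which is the contrast the paper draws with fixed-structure approaches.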
Probabilistic Bag-Of-Hyperlinks Model for Entity Linking
Many fundamental problems in natural language processing rely on determining what entities appear in a given text. Commonly referred to as entity linking, this step is a fundamental component of many NLP tasks such as text understanding, automatic summarization, semantic search, and machine translation. Name ambiguity, word polysemy, context dependencies, and a heavy-tailed distribution of entities contribute to the complexity of this problem. We propose a probabilistic approach that uses an effective graphical model to perform collective entity disambiguation. Input mentions (i.e., linkable token spans) are disambiguated jointly across an entire document by combining a document-level prior of entity co-occurrences with local information captured from mentions and their surrounding context. The model is based on simple sufficient statistics extracted from data, and thus has few parameters to learn. Our method requires neither extensive feature engineering nor an expensive training procedure. We use loopy belief propagation to perform approximate inference. The low complexity of our model makes this step fast enough for real-time use. We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.
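The collective disambiguation objective can be sketched in miniature. This toy maximizes the sum of local (mention-context) and pairwise (entity co-occurrence) log-scores by exhaustive search rather than loopy belief propagation, and all mention names, candidate entities, and scores are invented for illustration.

```python
import itertools
import math

# Local scores: how well each candidate entity fits each mention's
# context. Pairwise scores: how often entity pairs co-occur.
local = {
    "Jordan": {"Michael_Jordan": 0.6, "Jordan_(country)": 0.4},
    "Bulls": {"Chicago_Bulls": 0.7, "Bull_(animal)": 0.3},
}
cooc = {("Michael_Jordan", "Chicago_Bulls"): 0.9}

def pair_score(a, b):
    # Small default score for entity pairs with no recorded co-occurrence.
    return cooc.get((a, b), cooc.get((b, a), 0.1))

def disambiguate(mentions):
    """Exhaustive joint maximization of local plus pairwise
    log-scores over all candidate combinations. This is only
    viable for toys; the paper approximates the same argmax with
    loopy belief propagation."""
    best, best_score = None, -math.inf
    cands = [list(local[m].items()) for m in mentions]
    for combo in itertools.product(*cands):
        s = sum(math.log(p) for _, p in combo)
        s += sum(math.log(pair_score(a[0], b[0]))
                 for a, b in itertools.combinations(combo, 2))
        if s > best_score:
            best, best_score = [e for e, _ in combo], s
    return best

linked = disambiguate(["Jordan", "Bulls"])
```

The joint term is what makes the disambiguation collective: "Jordan" and "Bulls" reinforce each other's basketball readings through the co-occurrence prior.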
Subcellular localization for Gram Positive and Gram Negative Bacterial Proteins using Linear Interpolation Smoothing Model
Protein subcellular localization is an important topic in proteomics, since it relates to a protein's overall function, helps in understanding metabolic pathways, and aids drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied to predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. The technique builds a statistical model based on maximum likelihood. It deals effectively with the high dimensionality that hinders other traditional classifiers, such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. The approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins.
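Linear interpolation smoothing itself is a standard language-modeling technique: a higher-order maximum-likelihood estimate is mixed with a lower-order one. A minimal bigram/unigram sketch over character sequences follows; the training sequences and λ value are illustrative, and the paper's dependency models are richer than this.

```python
import math
from collections import Counter

def train(seqs):
    """Maximum-likelihood unigram and bigram counts from training
    sequences (e.g. protein sequences of one subcellular location)."""
    uni, bi = Counter(), Counter()
    for s in seqs:
        uni.update(s)
        bi.update(zip(s, s[1:]))
    return uni, bi, sum(uni.values())

def log_likelihood(seq, model, lam=0.7):
    """Score seq under the linearly interpolated estimate
    P(c | prev) = lam * P_bigram(c | prev) + (1 - lam) * P_unigram(c)."""
    uni, bi, total = model
    ll = 0.0
    for prev, c in zip(seq, seq[1:]):
        p_uni = uni[c] / total
        p_bi = bi[(prev, c)] / uni[prev] if uni[prev] else 0.0
        ll += math.log(lam * p_bi + (1 - lam) * p_uni)
    return ll

# Illustrative training set for one location.
model = train(["MKV", "MKA"])
```

Classification then trains one such model per subcellular location and assigns a new sequence to the location whose model gives the highest log-likelihood.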
Learning Multilingual Semantic Parsers for Question Answering over Linked Data. A comparison of neural and probabilistic graphical model architectures
Hakimov S. Learning Multilingual Semantic Parsers for Question Answering over Linked Data. A comparison of neural and probabilistic graphical model architectures. Bielefeld: Universität Bielefeld; 2019.
The task of answering natural language questions over structured data has received wide interest in recent years. Structured data in the form of knowledge bases has been made available for public use, with coverage of multiple domains. DBpedia and Freebase are such knowledge bases, containing encyclopedic data about multiple domains. However, querying such knowledge bases requires an understanding of a query language and the underlying ontology, which demands domain expertise. Querying structured data via question answering systems that understand natural language has gained popularity as a way to bridge the gap between the data and the end user.
In order to understand a natural language question, a question answering system needs to map the question into a query representation that can be evaluated against a knowledge base. An important aspect we focus on in this thesis is multilinguality. While most research has focused on building monolingual solutions, mainly for English, this thesis focuses on building multilingual question answering systems. The main challenge in processing language input is interpreting the meaning of questions in multiple languages.
In this thesis, we present three different semantic parsing approaches that learn models to map questions into meaning representations, specifically queries, in a supervised fashion. The approaches differ in how the model is learned, the features of the model, the way meaning is represented, and how the meaning of questions is composed. The first approach learns a joint probabilistic model for syntax and semantics simultaneously from the labeled data. The second learns a factorized probabilistic graphical model that builds on a dependency parse of the input question and predicts the meaning representation, which is then converted into a query. The last approach presents a number of neural architectures that tackle question answering in an end-to-end fashion. We evaluate each approach on publicly available datasets and compare them with state-of-the-art QA systems.
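To make the question-to-query mapping concrete, here is a deliberately naive, template-based sketch of the target representation. The template, the predicate lexicon, and the mapping itself are invented for illustration (`dbo:`/`dbr:` are standard DBpedia prefixes); the thesis learns this mapping from data rather than hard-coding it.

```python
# Illustrative predicate lexicon mapping a relation word to a
# knowledge-base property (invented for this example).
PREDICATES = {"capital": "dbo:capital"}

def parse(question):
    """Naive fixed-template parse of questions shaped like
    'What is the <relation> of <Entity>?'. Returns the meaning
    representation as a (relation, entity) pair."""
    words = question.rstrip("?").split()
    return words[3], words[-1]

def to_sparql(rel, entity):
    """Render the (relation, entity) meaning representation as a
    SPARQL query to be evaluated against the knowledge base."""
    return f"SELECT ?x WHERE {{ dbr:{entity} {PREDICATES[rel]} ?x }}"

query = to_sparql(*parse("What is the capital of France?"))
```

The learned parsers replace the fixed template with models that generalize across question shapes, vocabularies, and languages.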