Converting Instance Checking to Subsumption: A Rethink for Object Queries over Practical Ontologies
Efficiently querying Description Logic (DL) ontologies is becoming a vital
task in various data-intensive DL applications. Considered as a basic service
for answering object queries over DL ontologies, instance checking can be
realized by using the most specific concept (MSC) method, which converts
instance checking into subsumption problems. This method, however, loses its
simplicity and efficiency when applied to large and complex ontologies, as it
tends to generate very large MSCs that can lead to intractable reasoning. In
this paper, we propose a revision of the MSC method for the DL SHI, allowing it
to generate much simpler and smaller concepts that are specific enough to
answer a given query. Since the computed MSCs are mutually independent,
scalability for query answering can also be achieved by distributing and
parallelizing the computations. An empirical evaluation shows the efficacy of
our revised MSC method and the significant efficiency gains achieved when using
it to answer object queries.
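The reduction at the heart of the MSC method is: an individual a is an instance of a query concept C iff MSC(a) is subsumed by C. A minimal sketch, assuming a toy ontology restricted to conjunctions of atomic concepts (real SHI reasoning must also handle roles, inverses, and transitivity; all names below are hypothetical):

```python
# Toy sketch of the instance-checking-to-subsumption reduction,
# assuming concepts are plain conjunctions of atomic names.

def msc(abox, individual):
    """Most specific concept of an individual: here, simply the
    conjunction (set) of all atomic concepts asserted for it."""
    return {c for (i, c) in abox if i == individual}

def subsumed_by(concept_d, concept_c):
    """D is subsumed by C (for conjunctions of atomics) iff every
    conjunct of C appears among the conjuncts of D."""
    return concept_c <= concept_d

def instance_of(abox, individual, query_concept):
    """Instance checking via the MSC method: a is in C iff MSC(a) is subsumed by C."""
    return subsumed_by(msc(abox, individual), query_concept)

abox = {("alice", "Person"), ("alice", "Employee"), ("bob", "Person")}
print(instance_of(abox, "alice", {"Person", "Employee"}))  # True
print(instance_of(abox, "bob", {"Employee"}))              # False
```

The paper's contribution is to keep MSC(a) small for expressive ontologies; in this toy setting the MSC is trivially small already.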
The DLV System for Knowledge Representation and Reasoning
This paper presents the DLV system, which is widely considered the
state-of-the-art implementation of disjunctive logic programming, and addresses
several aspects. As for problem solving, we provide a formal definition of its
kernel language, function-free disjunctive logic programs (also known as
disjunctive datalog), extended by weak constraints, which are a powerful tool
to express optimization problems. We then illustrate the usage of DLV as a tool
for knowledge representation and reasoning, describing a new declarative
programming methodology which allows one to encode complex problems (up to
Δ^P_3-complete problems) in a declarative fashion. On the foundational
side, we provide a detailed analysis of the computational complexity of the
language of DLV, and by deriving new complexity results we chart a complete
picture of the complexity of this language and important fragments thereof.
Furthermore, we illustrate the general architecture of the DLV system which
has been influenced by these results. As for applications, we overview
application front-ends which have been developed on top of DLV to solve
specific knowledge representation tasks, and we briefly describe the main
international projects investigating the potential of the system for industrial
exploitation. Finally, we report about thorough experimentation and
benchmarking, which has been carried out to assess the efficiency of the
system. The experimental results confirm the solidity of DLV and highlight its
potential for emerging application areas like knowledge management and
information integration.
Comment: 56 pages, 9 figures, 6 tables
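The declarative methodology supported by DLV is often summarized as "guess and check": a disjunctive rule guesses a candidate solution, and constraints prune invalid candidates. A hedged brute-force analogue in Python for graph 3-colouring (the graph and names are illustrative, not DLV syntax):

```python
# Brute-force illustration of the "guess and check" style a disjunctive
# program expresses declaratively: a rule like
#   col(X,r) v col(X,g) v col(X,b) :- node(X).
# guesses a colouring, and a constraint rejects colourings in which an
# edge joins two equally coloured nodes.
from itertools import product

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
colours = ["r", "g", "b"]

def valid(assignment):
    # the "check" part: no edge between equally coloured nodes
    return all(assignment[x] != assignment[y] for x, y in edges)

# the "guess" part: enumerate every candidate colouring
solutions = [dict(zip(nodes, pick))
             for pick in product(colours, repeat=len(nodes))
             if valid(dict(zip(nodes, pick)))]

print(len(solutions) > 0)  # True: this graph is 3-colourable
```

A DLV encoding expresses the same guess and check in a few rules and leaves the search to the solver; weak constraints would additionally let one prefer, say, colourings minimizing the use of a particular colour.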
An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method
There are alternative methods of estimating the capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used to check the consistency of alternative estimates. This method, in contrast to most current approaches, pays due regard both to potential output and to the productivity of capital. It works well, and it requires only small amounts of readily available data. This makes it virtually costless in both time and funding.
Keywords: Benchmark capital, Perpetual Inventory Method (PIM), Potential output, Capital productivity, Optimal Consistency Method (OCM)
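The Perpetual Inventory Method mentioned in the keywords rolls the capital stock forward from a benchmark year via K_t = (1 - δ)K_{t-1} + I_t, which is why the benchmark level matters: any error in K_0 persists in the whole series. A minimal numeric sketch (all figures hypothetical, not from the paper):

```python
# Perpetual Inventory Method (PIM): K_t = (1 - delta) * K_{t-1} + I_t.
# All figures are hypothetical; the point is that an error in the
# benchmark K_0 persists in the series, decaying only at rate delta.

def pim_series(k0, investments, delta):
    series = [k0]
    for inv in investments:
        series.append((1 - delta) * series[-1] + inv)
    return series

investment = [10.0, 12.0, 11.0]   # hypothetical annual investment
low  = pim_series(100.0, investment, delta=0.05)
high = pim_series(120.0, investment, delta=0.05)

# A 20-unit benchmark error shrinks only to 20 * 0.95**3 after 3 years:
print(round(high[-1] - low[-1], 2))  # 17.15
```

This is exactly the consistency problem the OCM addresses: without an independent check, the benchmark error is invisible from the series itself.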
Efficient computation of exact solutions for quantitative model checking
Quantitative model checkers for Markov Decision Processes typically use
finite-precision arithmetic. If all the coefficients in the process are
rational numbers, then the model checking results are rational, and so they can
be computed exactly. However, exact techniques are generally too expensive or
limited in scalability. In this paper we propose a method for obtaining exact
results starting from an approximated solution in finite-precision arithmetic.
The input of the method is a description of a scheduler, which can be obtained
by a model checker using finite precision. Given a scheduler, we show how to
obtain a corresponding basis in a linear-programming problem, in such a way
that the basis is optimal whenever the scheduler attains the worst-case
probability. This correspondence is already known for discounted MDPs; we show
how to apply it in the undiscounted case, provided that some preprocessing is
done. Using the correspondence, the linear-programming problem can be solved in
exact arithmetic starting from the basis obtained. As a consequence, the method
finds the worst-case probability even if the scheduler provided by the model
checker was not optimal. In our experiments, the calculation of exact solutions
from a candidate scheduler is significantly faster than the calculation using
the simplex method under exact arithmetic starting from a default basis.
Comment: In Proceedings QAPL 2012, arXiv:1207.055
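The step that makes exact computation cheap is that, once a memoryless scheduler fixes each nondeterministic choice, the MDP induces a Markov chain whose reachability probabilities solve a linear system over the rationals. A hedged sketch of that exact-arithmetic step on a hypothetical two-state chain (the paper's actual method works via the LP basis; this only illustrates solving the induced system exactly):

```python
# After applying a memoryless scheduler, reachability probabilities x_s
# satisfy the linear system  x_s = P(s, goal) + sum_t P(s, t) * x_t.
# Solving it with exact rationals avoids floating-point error.
from fractions import Fraction as F

# Hypothetical induced chain:
#   s0 -> goal w.p. 1/3, s1 w.p. 1/3, sink w.p. 1/3
#   s1 -> goal w.p. 1/2, s0 w.p. 1/2
# System in (x0, x1):  x0 - 1/3*x1 = 1/3,   -1/2*x0 + x1 = 1/2
A = [[F(1), F(-1, 3)], [F(-1, 2), F(1)]]
b = [F(1, 3), F(1, 2)]

def solve_exact(A, b):
    """Gaussian elimination over the rationals (no pivoting needed here)."""
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

print(solve_exact(A, b))  # [Fraction(3, 5), Fraction(4, 5)]
```

Here the exact worst-case probabilities 3/5 and 4/5 come out as true rationals, where finite-precision iteration would only approximate them.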
Measuring Accuracy of Triples in Knowledge Graphs
An increasing number of large-scale knowledge graphs have been constructed in recent years. These graphs are often created by text-based extraction, which can be very noisy. So far, cleaning knowledge graphs has mostly been carried out by human experts and is thus very inefficient. It is therefore necessary to explore automatic methods for identifying and eliminating erroneous information. To achieve this, previous approaches rely primarily on internal information, i.e. the knowledge graph itself. In this paper, we introduce an automatic approach, Triples Accuracy Assessment (TAA), for validating RDF triples (source triples) in a knowledge graph by finding consensus among matched triples (target triples) from other knowledge graphs. TAA uses knowledge graph interlinks to find identical resources and applies different matching methods between the predicates of source and target triples. Based on the matched triples, TAA then calculates a confidence score indicating the correctness of a source triple. In addition, we present an evaluation of our approach using the FactBench dataset for fact validation. Our findings show promising results for distinguishing between correct and wrong triples.
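The final aggregation step of such an approach can be sketched as follows. This is a hypothetical weighted-agreement score, an assumption for illustration, not the paper's exact formula:

```python
# Hypothetical sketch of the aggregation step: combine matched target
# triples into a confidence score for a source triple. The weighted
# fraction of agreeing matches below is an illustrative assumption,
# not TAA's published formula.

def confidence(matches):
    """matches: list of (agrees, weight) pairs, one per matched target
    triple; returns a score in [0, 1] (0.0 if nothing matched)."""
    if not matches:
        return 0.0
    total = sum(w for _, w in matches)
    agree = sum(w for a, w in matches if a)
    return agree / total

# Source triple matched in three target graphs, with match weights:
matches = [(True, 1.0), (True, 0.8), (False, 0.5)]
print(round(confidence(matches), 3))  # 0.783
```

A threshold on this score would then separate triples accepted as correct from those flagged as erroneous.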