Inference of termination conditions for numerical loops in Prolog
We present a new approach to termination analysis of numerical computations
in logic programs. Traditional approaches fail to analyse them due to the
non-well-foundedness of the integers. We present a technique that allows overcoming
these difficulties. Our approach is based on transforming a program in a way
that allows integrating and extending techniques originally developed for
analysis of numerical computations in the framework of query-mapping pairs with
the well-known framework of acceptability. Such an integration not only
contributes to the understanding of termination behaviour of numerical
computations, but also allows us to perform a correct analysis of such
computations automatically, by extending previous work on a constraint-based
approach to termination. Finally, we discuss possible extensions of the
technique, including incorporating general term orderings.
Comment: To appear in Theory and Practice of Logic Programming.
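To illustrate the difficulty the abstract alludes to, consider a hypothetical numerical loop such as `p(X) :- X < 100, Y is X + 1, p(Y).` (the clause is illustrative, not taken from the paper). The argument grows, so no decrease on the integers themselves proves termination; instead, the loop condition supplies a bounded, strictly decreasing measure. A minimal Python sketch of this reasoning:

```python
# Illustrative sketch (not the paper's algorithm): the loop
#   p(X) :- X < 100, Y is X + 1, p(Y).
# terminates because the measure (100 - X) is a non-negative integer
# that strictly decreases at every recursive call.

def p_terminates(x, bound=100):
    """Run the loop, checking the well-founded measure bound - x."""
    steps = 0
    while x < bound:              # loop condition from the clause body
        measure = bound - x       # candidate measure: non-negative integer
        assert measure > 0
        x += 1                    # Y is X + 1
        steps += 1
        assert bound - x < measure  # strictly decreasing
    return steps

print(p_terminates(90))  # → 10
```

The inferred termination condition here is that the query is called with an integer argument, so that `bound - x` is well-founded.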
Inference of termination conditions for numerical loops
We present a new approach to termination analysis of numerical computations
in logic programs. Traditional approaches fail to analyse them due to the
non-well-foundedness of the integers. We present a technique that allows us to
overcome these difficulties. Our approach is based on transforming a program in
a way that allows integrating and extending techniques originally developed for
analysis of numerical computations in the framework of query-mapping pairs with
the well-known framework of acceptability. Such an integration not only
contributes to the understanding of termination behaviour of numerical
computations, but also allows us to perform a correct analysis of such
computations automatically, thus extending previous work on a
constraint-based approach to termination. In the last section of the paper we
discuss possible extensions of the technique, including incorporating general
term orderings.
Comment: Presented at WST200
Bridging the Semantic Gap with SQL Query Logs in Natural Language Interfaces to Databases
A critical challenge in constructing a natural language interface to database
(NLIDB) is bridging the semantic gap between a natural language query (NLQ) and
the underlying data. Two specific ways this challenge manifests itself are
keyword mapping and join path inference. Keyword mapping is the task of
mapping individual keywords in the original NLQ to database elements (such as
relations, attributes or values). It is challenging due to the ambiguity in
mapping the user's mental model and diction to the schema definition and
contents of the underlying database. Join path inference is the process of
selecting the relations and join conditions in the FROM clause of the final SQL
query, and is difficult because NLIDB users lack knowledge of the database
schema or SQL and therefore cannot explicitly specify the intermediate tables
and joins needed to construct a final SQL query. In this paper, we propose
leveraging information from the SQL query log of a database to enhance the
performance of existing NLIDBs with respect to these challenges. We present a
system, Templar, that can be used to augment existing NLIDBs. Our extensive
experimental evaluation demonstrates the effectiveness of our approach, yielding
up to a 138% improvement in top-1 accuracy in existing NLIDBs by leveraging SQL
query log information.
Comment: Accepted to IEEE International Conference on Data Engineering (ICDE) 201
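As an illustration of the log-based idea (the schema, log, and scoring below are hypothetical, not Templar's actual algorithm), a SQL query log can disambiguate keyword mappings by preferring database elements that appear frequently in past queries:

```python
# Hypothetical sketch of log-based keyword mapping: rank candidate
# schema elements for an NLQ keyword by their frequency in the query log.
from collections import Counter

query_log = [
    "SELECT name FROM author WHERE affiliation = 'MIT'",
    "SELECT title FROM paper JOIN writes ON paper.id = writes.pid",
    "SELECT name FROM author JOIN writes ON author.id = writes.aid",
]

def element_popularity(log):
    """Count how often each token (crudely, each schema element) occurs."""
    counts = Counter()
    for sql in log:
        for token in sql.replace("=", " ").replace(".", " ").split():
            counts[token.lower()] += 1
    return counts

def rank_candidates(candidates, log):
    """Prefer elements that users queried more often in the past."""
    pop = element_popularity(log)
    return sorted(candidates, key=lambda el: -pop[el.lower()])

print(rank_candidates(["paper", "author"], query_log))  # → ['author', 'paper']
```

The same frequency information can weight candidate join paths, since commonly co-queried tables suggest likely FROM-clause structure.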
CML: the CommonKADS conceptual modelling language
We present a structured language for the specification of knowledge models according to the CommonKADS methodology. This language is called CML (Conceptual Modelling Language) and provides both a structured textual notation and a diagrammatic notation for expertise models. The use of our CML is illustrated by a variety of examples taken from the VT elevator design system.
Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models
The interpretation of complex high-dimensional data typically requires the
use of dimensionality reduction techniques to extract explanatory
low-dimensional representations. However, in many real-world problems these
representations may not be sufficient to aid interpretation on their own, and
it would be desirable to interpret the model in terms of the original features
themselves. Our goal is to characterise how feature-level variation depends on
latent low-dimensional representations, external covariates, and non-linear
interactions between the two. In this paper, we propose to achieve this through
a structured kernel decomposition in a hybrid Gaussian Process model which we
call the Covariate Gaussian Process Latent Variable Model (c-GPLVM). We
demonstrate the utility of our model on simulated examples and applications in
disease progression modelling from high-dimensional gene expression data in the
presence of additional phenotypes. In each setting we show how the c-GPLVM can
extract low-dimensional structures from high-dimensional data sets whilst
allowing a breakdown of feature-level variability that is not present in other
commonly used dimensionality reduction approaches.
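A minimal numpy sketch of the kernel structure described above (an assumed form with standard RBF components, not the paper's implementation): variation decomposes into a latent part, a covariate part, and a non-linear interaction given by their product.

```python
# Assumed-structure sketch of an additive-plus-interaction kernel
# decomposition in the spirit of c-GPLVM (not the authors' code).
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Standard squared-exponential kernel on 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def structured_kernel(z, c):
    """K = K_latent + K_covariate + K_latent * K_covariate (interaction)."""
    Kz = rbf(z, z)          # latent low-dimensional representation
    Kc = rbf(c, c)          # external covariate (e.g. a phenotype)
    return Kz + Kc + Kz * Kc  # elementwise product models the interaction

z = np.linspace(-1.0, 1.0, 5)            # latent coordinates
c = np.array([0.0, 0.0, 1.0, 1.0, 2.0])  # covariate values
K = structured_kernel(z, c)
print(K.shape)  # → (5, 5)
```

Each additive term stays a valid (positive semi-definite) kernel, so the decomposition lets one attribute feature-level variability to the latent space, the covariate, or their interaction separately.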
It Takes (Only) Two: Adversarial Generator-Encoder Networks
We present a new autoencoder-type architecture that is trainable in an
unsupervised mode, sustains both generation and inference, and has the quality
of conditional and unconditional samples boosted by adversarial learning.
Unlike previous hybrids of autoencoders and adversarial networks, the
adversarial game in our approach is set up directly between the encoder and the
generator, and no external mappings are trained in the process of learning. The
game objective compares the divergences of each of the real and the generated
data distributions with the prior distribution in the latent space. We show
that the direct generator-vs-encoder game leads to a tight coupling of the two
components, resulting in samples and reconstructions of a quality comparable to
some recently proposed, more complex architectures.
Polytool: polynomial interpretations as a basis for termination analysis of Logic programs
Our goal is to study the feasibility of porting termination analysis
techniques developed for one programming paradigm to another paradigm. In this
paper, we show how to adapt termination analysis techniques based on polynomial
interpretations - very well known in the context of term rewrite systems (TRSs)
- to obtain new (non-transformational) termination analysis techniques for
definite logic programs (LPs). This leads to an approach that can be seen as a
direct generalization of the traditional techniques in termination analysis of
LPs, where linear norms and level mappings are used. Our extension generalizes
these to arbitrary polynomials. We extend a number of standard concepts
and results on termination analysis to the context of polynomial
interpretations. We also propose a constraint-based approach for automatically
generating polynomial interpretations that satisfy the termination conditions.
Based on this approach, we implemented a new tool, called Polytool, for
automatic termination analysis of LPs.
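For illustration (an assumed encoding, not Polytool itself), the simplest polynomial interpretation used in LP termination analysis is the linear list-length norm; for `append/3` it strictly decreases on the first argument across the recursive clause, which witnesses termination:

```python
# Illustrative sketch: a linear norm is the degree-1 case of a polynomial
# interpretation, mapping terms to natural numbers.

def list_norm(term):
    """|[]| = 0, |[H|T]| = 1 + |T| -- the classic list-length norm."""
    if term == "[]":
        return 0
    _head, tail = term      # a cons cell represented as a (head, tail) pair
    return 1 + list_norm(tail)

# append([H|T], L, [H|R]) :- append(T, L, R).
xs = ("a", ("b", ("c", "[]")))
head_arg = xs        # first argument of the clause head
body_arg = xs[1]     # first argument of the recursive body call
assert list_norm(head_arg) > list_norm(body_arg)  # strict decrease
print(list_norm(head_arg), list_norm(body_arg))  # → 3 2
```

Replacing this linear measure with an arbitrary polynomial in the sizes of subterms is, roughly, the generalization the abstract describes, with a constraint solver searching for coefficients that make the decrease conditions hold.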