158 research outputs found
aspcud: A Linux Package Configuration Tool Based on Answer Set Programming
We present the Linux package configuration tool aspcud based on Answer Set
Programming. In particular, we detail aspcud's preprocessor turning a CUDF
specification into a set of logical facts. (Comment: In Proceedings LoCoCo 2011, arXiv:1108.609)
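The preprocessing step described above can be pictured with a toy translation. The stanza layout and the predicate names (`unit`, `depends`) below are illustrative assumptions, not aspcud's actual fact format:

```python
# Toy CUDF-stanza-to-facts translation (illustrative sketch only;
# aspcud's real preprocessor handles the full CUDF specification).
def cudf_to_facts(stanza):
    """Turn one package stanza into logical facts (assumed predicate names)."""
    facts = [f'unit("{stanza["package"]}",{stanza["version"]}).']
    for dep in stanza.get("depends", []):
        facts.append(f'depends("{stanza["package"]}",{stanza["version"]},"{dep}").')
    return facts

facts = cudf_to_facts({"package": "vim", "version": 2, "depends": ["libc"]})
```

A solver then reasons over such facts together with an encoding of the installation rules.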
Answer Set Programming Modulo `Space-Time'
We present ASP Modulo `Space-Time', a declarative representational and
computational framework to perform commonsense reasoning about regions with
both spatial and temporal components. Supported are capabilities for mixed
qualitative-quantitative reasoning, consistency checking, and inferring
compositions of space-time relations; these capabilities combine and synergise
across a range of AI application areas where the processing and
interpretation of spatio-temporal data is crucial. The framework and resulting
system constitute the only general KR-based method for declaratively reasoning about
the dynamics of `space-time' regions as first-class objects. We present an
empirical evaluation (with scalability and robustness results), and include
diverse application examples involving interpretation and control tasks.
Revisiting the Training of Logic Models of Protein Signaling Networks with a Formal Approach based on Answer Set Programming
A fundamental question in systems biology is the construction of mathematical
models and their training to data. Logic formalisms have become very popular
for modelling signaling networks because their simplicity makes it possible to
model large systems encompassing hundreds of proteins. An approach to train (Boolean) logic models
to high-throughput phospho-proteomics data was recently introduced and solved
using optimization heuristics based on stochastic methods. Here we demonstrate
how this problem can be solved using Answer Set Programming (ASP), a
declarative problem solving paradigm, in which a problem is encoded as a
logical program such that its answer sets represent solutions to the problem.
ASP offers significant improvements over heuristic methods in terms of
efficiency and scalability: it guarantees global optimality of solutions and
provides the complete set of solutions. We illustrate the application of ASP
with in silico cases based on realistic networks and data.
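The global-optimality and completeness guarantees mentioned above can be mimicked at toy scale by exhaustive enumeration. The rule set and data below are invented for illustration, and plain brute force stands in for the ASP solver:

```python
# Toy Boolean logic-model training (illustrative; the paper encodes the
# real problem in ASP over realistic networks and phospho-proteomics data).
RULES = {
    "AND":    lambda a, b: a and b,
    "OR":     lambda a, b: a or b,
    "A_ONLY": lambda a, b: a,
    "B_ONLY": lambda a, b: b,
}

# Observed (input_a, input_b) -> output measurements (invented toy data).
DATA = [((1, 1), 1), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

def fit_error(rule):
    """Number of data points the candidate rule fails to reproduce."""
    return sum(rule(a, b) != y for (a, b), y in DATA)

def all_optimal_rules():
    """Enumerate every candidate and return the complete set of minimizers,
    mirroring ASP's guarantee of global optimality and completeness."""
    errors = {name: fit_error(f) for name, f in RULES.items()}
    best = min(errors.values())
    return sorted(n for n, e in errors.items() if e == best), best

names, err = all_optimal_rules()
```

Here only the `OR` rule reproduces all four measurements, so the complete optimal set is `["OR"]` with error 0.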
System aspmt2smt: Computing ASPMT Theories by SMT Solvers
Answer Set Programming Modulo Theories (ASPMT) is an approach to combining answer set programming and satisfiability modulo theories based on the functional stable model semantics. It is shown that the tight fragment of ASPMT programs can be turned into SMT instances, thereby allowing SMT solvers to compute stable models of ASPMT programs. In this paper we present a compiler called ASPSMT2SMT, which implements this translation. The system uses the ASP grounder GRINGO and the SMT solver Z3. GRINGO partially grounds input programs while leaving some variables to be processed by Z3. We demonstrate that the system can effectively handle real-number computations for reasoning about continuous changes.
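The division of labour described above, with a grounder enumerating the discrete part and a theory solver handling the reals, can be sketched schematically. This is not the aspmt2smt pipeline; the scenario (choosing flow actions so a continuous level reaches a target) and all numbers are invented:

```python
from itertools import product

# Schematic hybrid solve (illustrative only): a discrete plan search stands
# in for grounding, and a linear real-valued update stands in for the
# theory reasoning an SMT solver would perform.
RATES = {"fill": 2.0, "drain": -1.0, "idle": 0.0}  # litres/second per action
STEP = 1.0          # seconds per step
TARGET = 3.0        # desired level after the plan
START = 0.0

def plans(n):
    """All discrete plans of n actions (the 'grounded' search space)."""
    return product(RATES, repeat=n)

def level_after(plan):
    """Continuous state update (the 'theory' part): linear in time."""
    return START + sum(RATES[a] * STEP for a in plan)

solutions = [p for p in plans(3) if level_after(p) == TARGET]
```

Only plans containing two `fill` actions and one `drain` reach the target, so three orderings qualify.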
Flexible graph matching and graph edit distance using answer set programming
The graph isomorphism, subgraph isomorphism, and graph edit distance problems
are combinatorial problems with many applications. Heuristic exact and
approximate algorithms for each of these problems have been developed for
different kinds of graphs: directed, undirected, labeled, etc. However,
additional work is often needed to adapt such algorithms to different classes
of graphs, for example to accommodate both labels and property annotations on
nodes and edges. In this paper, we propose an approach based on answer set
programming. We show how each of these problems can be defined for a general
class of property graphs with directed edges, and labels and key-value
properties annotating both nodes and edges. We evaluate this approach on a
variety of synthetic and realistic graphs, demonstrating that it is feasible as
a rapid prototyping approach. (Comment: To appear, PADL 202)
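The property-graph matching problem described above can be sketched on a toy instance. The graphs, labels, and properties below are invented, and brute-force enumeration stands in for the declarative search an ASP solver performs:

```python
from itertools import permutations

# Tiny property graphs: nodes and edges carry a label plus key-value
# properties (invented example; the paper targets general property graphs).
pattern_nodes = {1: ("Person", {"role": "author"}), 2: ("Paper", {})}
pattern_edges = {(1, 2): ("wrote", {})}

target_nodes = {
    "a": ("Person", {"role": "author"}),
    "b": ("Person", {"role": "editor"}),
    "p": ("Paper", {}),
}
target_edges = {("a", "p"): ("wrote", {}), ("b", "p"): ("edited", {})}

def node_ok(pn, tn):
    """Pattern node matches if labels agree and its properties are a subset."""
    (pl, pp), (tl, tp) = pn, tn
    return pl == tl and all(tp.get(k) == v for k, v in pp.items())

def subgraph_matches():
    """Enumerate injective node mappings; keep those preserving labeled edges."""
    found = []
    tids = list(target_nodes)
    for combo in permutations(tids, len(pattern_nodes)):
        m = dict(zip(pattern_nodes, combo))
        if not all(node_ok(pattern_nodes[p], target_nodes[m[p]]) for p in m):
            continue
        if all((m[u], m[v]) in target_edges
               and target_edges[(m[u], m[v])][0] == pattern_edges[(u, v)][0]
               for (u, v) in pattern_edges):
            found.append(m)
    return found

matches = subgraph_matches()
```

The `role: editor` node is excluded by the property check, leaving a single match.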
ODRL Policy Modelling and Compliance Checking
This paper addresses the problem of constructing a policy pipeline that enables compliance checking of business processes against regulatory obligations. Towards this end, we propose an Open Digital Rights Language (ODRL) profile that can be used to capture the semantics of both business policies, in the form of sets of required permissions, and regulatory requirements, in the form of deontic concepts, and present their translation into Answer Set Programming (via the Institutional Action Language (InstAL)) for compliance checking purposes. The result of the compliance checking is either a positive compliance result or an explanation pertaining to the aspects of the policy that are causing the noncompliance. The pipeline is illustrated using two key fragments of the General Data Protection Regulation, namely Article 6 (Lawfulness of processing) and Article 46 (Transfers subject to appropriate safeguards), and industrially relevant use cases that involve the specification of sets of permissions that are needed to execute business processes. The core contributions of this paper are: the ODRL profile, which is capable of modelling regulatory obligations and business policies; the exercise of modelling elements of the GDPR in this semantic formalism; the operationalisation of the model to demonstrate its capability to support personal data processing compliance checking; and a basis for explaining why a request is deemed compliant or not.
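The check-or-explain behaviour described above can be sketched in miniature. This is illustrative only (the actual pipeline uses ODRL, InstAL, and ASP), and the permission names are invented:

```python
# Toy compliance check (illustrative sketch): a process is compliant when
# every required permission is granted and none is prohibited; otherwise
# an explanation of the noncompliance is returned.
def check_compliance(required, granted, prohibited):
    missing = sorted(required - granted)
    violated = sorted(required & prohibited)
    if not missing and not violated:
        return True, []
    reasons = [f"permission not granted: {p}" for p in missing]
    reasons += [f"action prohibited: {p}" for p in violated]
    return False, reasons

ok, why = check_compliance(
    required={"store_data", "transfer_eu"},
    granted={"store_data"},
    prohibited={"transfer_eu"},
)
```

Here the result is noncompliant, with two reasons identifying the offending permission.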
Meneco, a Topology-Based Gap-Filling Tool Applicable to Degraded Genome-Wide Metabolic Networks
Increasing amounts of sequence data are becoming available for a wide range of non-model organisms. Investigating and modelling the metabolic behaviour of those organisms is highly relevant to understand their biology and ecology. As sequences are often incomplete and poorly annotated, draft networks of their metabolism largely suffer from incompleteness. Appropriate gap-filling methods to identify and add missing reactions are therefore required to address this issue. However, current tools rely on phenotypic or taxonomic information, or are very sensitive to the stoichiometric balance of metabolic reactions, especially concerning the co-factors. This type of information is often not available, or at least prone to errors, for newly explored organisms. Here we introduce Meneco, a tool dedicated to the topological gap-filling of genome-scale draft metabolic networks. Meneco reformulates gap-filling as a qualitative combinatorial optimization problem, omitting constraints raised by the stoichiometry of a metabolic network considered in other methods, and solves this problem using Answer Set Programming. Run on several artificial test sets gathering 10,800 degraded Escherichia coli networks, Meneco was able to efficiently identify essential reactions missing in networks at high degradation rates, outperforming the stoichiometry-based tools in scalability. To demonstrate the utility of Meneco we applied it to two case studies. Its application to recent metabolic networks reconstructed for the brown algal model Ectocarpus siliculosus and an associated bacterium, Candidatus Phaeomarinobacter ectocarpi, revealed several candidate metabolic pathways for algal-bacterial interactions. Then Meneco was used to reconstruct, from transcriptomic and metabolomic data, the first metabolic network for the microalga Euglena mutabilis.
These two case studies show that Meneco is a versatile tool to complete draft genome-scale metabolic networks produced from heterogeneous data, and to suggest relevant reactions that explain the metabolic capacity of a biological system.
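The topological (stoichiometry-free) formulation of gap-filling can be sketched on a toy network. The reactions, seeds, and targets below are invented, and brute-force search over repair sets stands in for the ASP optimization:

```python
from itertools import combinations

# Toy draft network: each reaction maps a set of reactants to products
# (invented example; Meneco works on genome-scale networks via ASP).
draft = {"r1": ({"A"}, {"B"})}
repair_db = {"r2": ({"B"}, {"C"}), "r3": ({"C"}, {"D"}), "r4": ({"A"}, {"D"})}
seeds, targets = {"A"}, {"D"}

def producible(reactions):
    """Forward closure: a reaction fires once all its reactants are producible."""
    scope = set(seeds)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions.values():
            if reactants <= scope and not products <= scope:
                scope |= products
                changed = True
    return scope

def minimal_completions():
    """Smallest repair sets making every target producible (complete set)."""
    for k in range(len(repair_db) + 1):
        hits = [set(c) for c in combinations(repair_db, k)
                if targets <= producible({**draft, **{r: repair_db[r] for r in c}})]
        if hits:
            return hits
    return []

completions = minimal_completions()
```

Adding the single reaction `r4` already makes the target reachable, so it is the unique minimal completion.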
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
Recent progress in both AI and robotics has enabled the development of general-purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially human–robot interaction for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (a) execute action sequences to complete user requests, (b) efficiently ask questions to resolve user requests, (c) understand human commands given in natural language, and (d) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.
ASP, Amalgamation and the Conceptual Blending Workflow
We present a framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Herein, we employ the search capabilities of ASP to find commonalities among input concepts as part of the blending process, and we show how our approach fits within a generalised conceptual blending workflow. Specifically, we orchestrate ASP with imperative Python programming to query external tools for theorem proving and colimit computation. We exemplify our approach with an example of creativity in mathematics. © Springer International Publishing Switzerland 2015. This work is supported by the 7th Framework Programme for Research of the European Commission funded COINVENT project (FET-Open grant number: 611553). M. Eppe is supported by the German Academic Exchange Service. Peer Reviewed
A SAT Approach to Clique-Width
Clique-width is a graph invariant that has been widely studied in
combinatorics and computer science. However, computing the clique-width of a
graph is an intricate problem; the exact clique-width is not known even for
very small graphs. We present a new method for computing the clique-width of
graphs based on an encoding to propositional satisfiability (SAT) which is then
evaluated by a SAT solver. Our encoding is based on a reformulation of
clique-width in terms of partitions that utilizes an efficient encoding of
cardinality constraints. Our SAT-based method is the first to discover the
exact clique-width of various small graphs, including famous graphs from the
literature as well as random graphs of various densities. With our method we
determined the smallest graphs that require a small prescribed clique-width. (Comment: proofs in Section 3 updated; results remain unchanged)