Exploiting the Hierarchical Structure of Rule-Based Specifications for Decision Planning
Rule-based specifications have been very successful as a declarative approach in many domains, due to the handy yet solid foundations offered by rule-based machinery such as term and graph rewriting. Realistic problems, however, call for suitable techniques to guarantee scalability. For instance, many domains exhibit a hierarchical structure that can be exploited conveniently. This is particularly evident for composition associations of models. We propose an explicit representation of such structured models and a methodology that exploits it for the description and analysis of model- and rule-based systems. The approach is developed in the framework of rewriting logic, efficiently implemented in the rewrite engine Maude, and illustrated with a case study.
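The abstract's central idea, exploiting composition hierarchy so that rule matching is localised to a sub-model rather than the whole system, can be sketched outside Maude. Below is a minimal Python illustration; all names (`rewrite`, `activate_idle`, the `kind`/`children` fields) are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not Maude): hierarchical models as nested dicts,
# with a rewrite rule tried top-down and recursion into children, so a
# rule's search space is one component, not the flattened system.

def rewrite(model, rule):
    """Apply `rule` once, top-down; otherwise recurse into children."""
    result = rule(model)
    if result is not None:
        return result
    children = model.get("children", [])
    for i, child in enumerate(children):
        new_child = rewrite(child, rule)
        if new_child is not child:  # rule fired somewhere below
            new_children = children[:i] + [new_child] + children[i + 1:]
            return {**model, "children": new_children}
    return model

# Hypothetical rule: an idle worker, wherever it sits, becomes busy.
def activate_idle(node):
    if node.get("kind") == "worker" and node.get("state") == "idle":
        return {**node, "state": "busy"}
    return None

system = {"kind": "plant", "children": [
    {"kind": "cell", "children": [
        {"kind": "worker", "state": "idle"},
        {"kind": "worker", "state": "busy"}]}]}

stepped = rewrite(system, activate_idle)
```

Because each step rebuilds only the spine from the rewritten leaf to the root, the original model is left untouched, mirroring the side-effect-free semantics of rewriting logic.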
Service discovery and negotiation with COWS
To provide formal foundations for current (web) services technologies, we advocate the use of COWS, a process calculus for specifying, combining and analysing services, as a uniform formalism for modelling all the relevant phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, deployment and execution. In this paper, we show that constraints and operations on them can be smoothly incorporated into COWS, and propose a disciplined way to model multisets of constraints and to manipulate them through appropriate interaction protocols. We thus demonstrate that QoS requirement specifications and SLA achievements, as well as the phases of dynamic service discovery and negotiation, can also be comfortably modelled in COWS. We illustrate our approach through a scenario involving a service-based web hosting provider.
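The multiset-of-constraints idea can be made concrete with a toy negotiation step. The sketch below is plain Python, not COWS (which is a process calculus), and every name in it (`add_constraint`, `negotiate`, the attribute keys) is a hypothetical illustration of intersecting client requirements with provider offers.

```python
# Illustrative sketch: SLA negotiation as the intersection of interval
# constraints collected in a multiset keyed by QoS attribute.

from collections import defaultdict

def add_constraint(store, attr, low, high):
    store[attr].append((low, high))  # multiset: duplicates allowed

def negotiate(store):
    """Per attribute, the interval satisfying every constraint,
    or None when the accumulated constraints are inconsistent."""
    agreed = {}
    for attr, intervals in store.items():
        low = max(lo for lo, _ in intervals)
        high = min(hi for _, hi in intervals)
        agreed[attr] = (low, high) if low <= high else None
    return agreed

store = defaultdict(list)
add_constraint(store, "bandwidth_mbps", 100, 1000)  # client requirement
add_constraint(store, "bandwidth_mbps", 50, 500)    # provider offer
add_constraint(store, "cost_eur", 0, 30)            # client budget
add_constraint(store, "cost_eur", 40, 90)           # provider pricing

sla = negotiate(store)
```

An inconsistent attribute (here, cost) yields no agreement, which is the point at which a negotiation protocol would trigger a counter-offer.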
Probabilistic Programming Concepts
A multitude of different probabilistic programming languages exists today,
all extending a traditional programming language with primitives to support
modeling of complex, structured probability distributions. Each of these
languages employs its own probabilistic primitives, and comes with a particular
syntax, semantics and inference procedure. This makes it hard to understand the
underlying programming concepts and appreciate the differences between the
different languages. To obtain a better understanding of probabilistic
programming, we identify a number of core programming concepts underlying the
primitives used by various probabilistic languages, discuss the execution
mechanisms they require, and use these concepts to position state-of-the-art
probabilistic languages and their implementations. In doing so, we focus on
probabilistic extensions of logic programming languages such as Prolog, which
have been developed for more than 20 years.
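One core concept shared by the Prolog-based languages the abstract mentions is the probabilistic fact. The Python sketch below illustrates, via brute-force enumeration of possible worlds, how such facts induce a distribution over query outcomes; the program (`burglary`, `earthquake`, `alarm`) is a standard textbook example, and the enumeration is an assumption for illustration, not any language's actual inference procedure.

```python
# Sketch of ProbLog-style probabilistic facts: each fact is true with
# its annotated probability; a query's probability is the total weight
# of the possible worlds in which it holds.

from itertools import product

# Hypothetical program:  0.6::burglary.  0.2::earthquake.
#                        alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.6, "earthquake": 0.2}

def alarm(world):
    return world["burglary"] or world["earthquake"]

def prob(query):
    names = list(facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for n in names:
            weight *= facts[n] if world[n] else 1 - facts[n]
        if query(world):
            total += weight
    return total

p_alarm = prob(alarm)  # 1 - 0.4 * 0.8 = 0.68
```

Real systems avoid this exponential enumeration, typically by compiling programs to compact representations such as binary decision diagrams, but the semantics being computed is the same.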
A Partial Taxonomy of Substitutability and Interchangeability
Substitutability, interchangeability and related concepts in Constraint
Programming were introduced approximately twenty years ago and have given rise
to considerable subsequent research. We survey this work, classifying and
relating the different concepts, and indicate directions for future work, in
particular with respect to making connections with research into symmetry
breaking. This paper is a condensed version of a larger work in progress.
Comment: 18 pages, The 10th International Workshop on Symmetry in Constraint
Satisfaction Problems (SymCon'10)
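The central definition behind this taxonomy, Freuder's neighbourhood interchangeability, is easy to state operationally: two values for a variable are interchangeable when every constraint on that variable admits exactly the same supports for both. A minimal Python check (all function names and the toy CSP are hypothetical illustrations):

```python
# Sketch of neighbourhood interchangeability: values a and b for a
# variable x are interchangeable when, for every binary constraint on
# x, they are consistent with the same values of the other variable.

def supports(value, other_domain, allowed):
    return {v for v in other_domain if (value, v) in allowed}

def neighbourhood_interchangeable(a, b, constraints):
    """constraints: list of (other_domain, allowed_pairs) touching x."""
    return all(
        supports(a, dom, allowed) == supports(b, dom, allowed)
        for dom, allowed in constraints
    )

# Toy CSP: x constrained against y with domain {1, 2}.
y_dom = {1, 2}
allowed = {(1, 1), (2, 1), (2, 2), (3, 1), (3, 2)}
cs = [(y_dom, allowed)]

same_23 = neighbourhood_interchangeable(2, 3, cs)  # identical supports
diff_12 = neighbourhood_interchangeable(1, 2, cs)  # value 1 lacks (1, 2)
```

Interchangeable values can be merged without losing solutions, which is what links these concepts to symmetry breaking.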
Saggitarius: A DSL for Specifying Grammatical Domains
Common data types like dates, addresses, phone numbers and tables can have
multiple textual representations, and many heavily-used languages, such as SQL,
come in several dialects. These variations can cause data to be misinterpreted,
leading to silent data corruption, failure of data processing systems, or even
security vulnerabilities. Saggitarius is a new language and system designed to
help programmers reason about the format of data, by describing grammatical
domains -- that is, sets of context-free grammars that describe the many
possible representations of a datatype. We describe the design of Saggitarius
via example and provide a relational semantics. We show how Saggitarius may be
used to analyze a data set: given example data, it uses an algorithm based on
semi-ring parsing and MaxSAT to infer which grammar in a given domain best
matches that data. We evaluate the effectiveness of the algorithm on a
benchmark suite of 110 example problems, and we demonstrate that our system
typically returns a satisfying grammar within a few seconds with only a small
number of examples. We also delve deeper into a more extensive case study on
using Saggitarius for CSV dialect detection. Despite being general-purpose,
Saggitarius offers results comparable to hand-tuned, specialized tools; in the
case of CSV, it infers grammars for 84% of benchmarks within 60 seconds, with
accuracy comparable to custom-built dialect detection tools.
Comment: OOPSLA 202
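The "pick the best grammar in a domain from example data" task can be sketched for the CSV case with standard library tools. This is not Saggitarius's semiring-parsing/MaxSAT algorithm; it is a simplified Python stand-in that scores a small hypothetical domain of candidate dialects by how consistently each parses a sample.

```python
# Sketch: choose, from a small domain of candidate CSV dialects, the
# one under which every row of the sample parses to the same (and
# largest) number of fields.

import csv
import io

CANDIDATES = [
    {"delimiter": ",", "quotechar": '"'},
    {"delimiter": ";", "quotechar": '"'},
    {"delimiter": "\t", "quotechar": '"'},
]

def score(sample, dialect):
    rows = list(csv.reader(io.StringIO(sample), **dialect))
    widths = {len(r) for r in rows}
    if len(widths) != 1:
        return 0          # inconsistent row widths: reject
    return widths.pop()   # prefer dialects that actually split fields

def best_dialect(sample):
    return max(CANDIDATES, key=lambda d: score(sample, d))

sample = "a;b;c\n1;2;3\n4;5;6\n"
chosen = best_dialect(sample)
```

Saggitarius generalises this picture well beyond CSV: the candidate set is a whole grammatical domain of context-free grammars, and the scoring is done by semiring parsing with MaxSAT selecting the best-matching grammar.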