The Ciao clp(FD) library. A modular CLP extension for Prolog
We present a new free library for Constraint Logic Programming over Finite Domains, included with the Ciao Prolog system. The library is entirely written in Prolog, leveraging Ciao's module system and code transformation capabilities in order to achieve a highly modular design without compromising performance. We describe the interface,
implementation, and design rationale of each modular component. The library meets several design goals: a high level of modularity, allowing the individual components to be replaced by different versions; high efficiency, being competitive with other CLP(FD) implementations; a glass-box
approach, so the user can specify new constraints at different levels; and a Prolog implementation, in order to ease the integration with Ciao's code analysis components. The core is built upon two small libraries which implement integer ranges and closures. On top of these, a finite domain
variable datatype is defined, taking care of constraint re-execution depending on range changes. These three libraries form what we call the kernel of the library. This kernel is used in turn to implement several higher-level finite domain constraints, specified using indexicals. Together with a labeling module, this layer forms what we name the solver. A final level integrates the CLP(FD) paradigm with our solver. This is achieved using attributed variables and a compiler from
the CLP(FD) language to the set of constraints provided by the solver. It should be noted that the user of the library is encouraged to work at any of those levels as convenient: from writing a new range module to enriching the set of constraints by writing new indexicals
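The glass-box indexical idea described above can be sketched abstractly: a constraint such as "X in min(Y)+c .. max(Y)+c" is a small closure that re-narrows one variable's range whenever the other's changes. The Python model below is illustrative only (the class and function names are hypothetical and do not reflect the Ciao clp(FD) API):

```python
# Sketch of indexical-style propagation: finite-domain variables carry
# closures ("watchers") that are re-executed whenever their range narrows.
# Names are illustrative, not the Ciao clp(FD) interface.

class FDVar:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.watchers = []          # closures re-executed on range change

    def narrow(self, lo, hi):
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            raise ValueError("empty domain")
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            for w in self.watchers:
                w()                 # constraint re-execution

def plus_const(x, y, c):
    """Indexical pair: X in min(Y)+c..max(Y)+c and Y in min(X)-c..max(X)-c."""
    def fwd(): x.narrow(y.lo + c, y.hi + c)
    def bwd(): y.narrow(x.lo - c, x.hi - c)
    y.watchers.append(fwd)
    x.watchers.append(bwd)
    fwd(); bwd()

x, y = FDVar(0, 10), FDVar(2, 5)
plus_const(x, y, 1)                 # X = Y + 1
print((x.lo, x.hi))                 # X narrows to 3..6
```

A real kernel would also record *which* bound changed (min, max, or any) so that only the relevant indexicals are re-executed.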
Indexing the Event Calculus with Kd-trees to Monitor Diabetes
Personal Health Systems (PHS) are mobile solutions tailored to monitoring
patients affected by chronic non-communicable diseases. A patient affected by a
chronic disease can generate large amounts of events. Type 1 Diabetic patients
generate several glucose events per day, ranging from at least 6 events per day
(under normal monitoring) to 288 per day when wearing a continuous glucose
monitor (CGM) that samples the blood every 5 minutes for several days. This is
a large number of events to monitor for medical doctors, in particular when
considering that they may have to take decisions concerning adjusting the
treatment, which may impact the life of the patients for a long time. Given the
need to analyse such a large stream of data, doctors need a simple approach
towards physiological time series that allows them to promptly transfer their
knowledge into queries to identify interesting patterns in the data. Achieving
this with current technology is not an easy task, as on one hand it cannot be
expected that medical doctors have the technical knowledge to query databases
and on the other hand these time series include thousands of events, which
requires re-thinking the way data is indexed. In order to tackle the knowledge
representation and efficiency problem, this contribution presents the kd-tree
cached event calculus (CECKD), an event calculus extension for knowledge
engineering of temporal rules, capable of handling the many thousands of events produced
by a diabetic patient. CECKD is built as a support to a graphical interface
to represent monitoring rules for type 1 diabetes. In addition, the paper
evaluates CECKD with respect to the cached event calculus (CEC) to show
how indexing events using kd-trees improves scalability with respect to the
current state of the art.
Comment: 24 pages. Preliminary results calculated on an implementation of
CECKD, a precursor to a journal paper being submitted in 2017 with further
indexing and results possibilities; put here for reference and chronological
purposes to record how the idea evolved
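The benefit of a kd-tree here is that temporal rules translate into rectangular range queries over (timestamp, value) pairs. A minimal sketch, assuming simple tuple events (CECKD's actual index structure and event representation differ):

```python
# Minimal 2-d kd-tree over (timestamp, glucose) events with a range query,
# illustrating why such an index helps when evaluating temporal rules over
# thousands of events. A sketch only; not the CECKD data structure.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def range_query(node, lo, hi, out):
    """Collect points p with lo[i] <= p[i] <= hi[i] on both axes."""
    if node is None:
        return
    point, axis, left, right = node
    if all(lo[i] <= point[i] <= hi[i] for i in (0, 1)):
        out.append(point)
    if lo[axis] <= point[axis]:      # left subtree may intersect the box
        range_query(left, lo, hi, out)
    if point[axis] <= hi[axis]:      # right subtree may intersect the box
        range_query(right, lo, hi, out)

# Events: (minutes since midnight, mg/dL). Query: readings >= 180 up to minute 720.
events = [(60, 110), (300, 190), (420, 250), (700, 95), (720, 180)]
tree = build(events)
hits = []
range_query(tree, (0, 180), (720, 400), hits)
print(sorted(hits))
```

Unlike a flat scan, subtrees whose bounding region falls outside the query box are pruned, which is what yields the scalability gain over linear event lists.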
Simulation and statistical model-checking of logic-based multi-agent system models
This thesis presents SALMA (Simulation and Analysis of Logic-Based Multi-
Agent Models), a new approach for simulation and statistical model checking
of multi-agent system models.
Statistical model checking is a relatively new branch of model-based approximate
verification methods that helps to overcome the well-known scalability
problems of exact model checking. In contrast to existing solutions,
SALMA specifies the mechanisms of the simulated system by means of logical
axioms based upon the well-established situation calculus. Leveraging
the resulting first-order logic structure of the system model, the simulation
is coupled with a statistical model-checker that uses a first-order variant of
time-bounded linear temporal logic (LTL) for describing properties. This is
combined with a procedural and process-based language for describing agent
behavior. Together, these parts create a very expressive framework for modeling
and verification that allows direct fine-grained reasoning about the agents'
interaction with each other and with their (physical) environment.
SALMA extends the classical situation calculus and linear temporal logic
(LTL) with means to address the specific requirements of multi-agent simulation
models. In particular, cyber-physical domains are considered where
the agents interact with their physical environment. Among other things,
the thesis describes a generic situation calculus axiomatization that encompasses
sensing and information transfer in multi-agent systems, for instance
sensor measurements or inter-agent messages. The proposed model explicitly
accounts for real-time constraints and stochastic effects that are inevitable in
cyber-physical systems.
In order to make SALMA's statistical model checking facilities usable also
for more complex problems, a mechanism for the efficient on-the-fly evaluation
of first-order LTL properties was developed. In particular, the presented algorithm
uses an interval-based representation of the formula evaluation state
together with several other optimization techniques to avoid unnecessary computation.
Altogether, the goal of this thesis was to create an approach for simulation
and statistical model checking of multi-agent systems that builds upon
well-proven logical and statistical foundations, but at the same time takes a
pragmatic software engineering perspective that considers factors like usability,
scalability, and extensibility. In fact, experience gained during several small
to mid-sized experiments presented in this thesis suggests that the
SALMA approach seems to be able to live up to these expectations.
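The evaluation of time-bounded temporal properties over a simulation trace can be illustrated with a naive recursive checker. This is a sketch only: SALMA's actual algorithm maintains interval-based evaluation state and supports first-order quantification, whereas the formulas and predicates below are hypothetical propositional examples.

```python
# Naive evaluator for time-bounded LTL over a finite simulation trace.
# Formulas are nested tuples; states are dicts produced by a simulation step.

def holds(formula, trace, t=0):
    op = formula[0]
    if op == "atom":                  # ("atom", predicate over a state)
        return formula[1](trace[t])
    if op == "not":
        return not holds(formula[1], trace, t)
    if op == "and":
        return holds(formula[1], trace, t) and holds(formula[2], trace, t)
    if op == "G":                     # ("G", bound, phi): always within bound
        return all(holds(formula[2], trace, u)
                   for u in range(t, min(t + formula[1] + 1, len(trace))))
    if op == "F":                     # ("F", bound, phi): eventually within bound
        return any(holds(formula[2], trace, u)
                   for u in range(t, min(t + formula[1] + 1, len(trace))))
    raise ValueError(op)

# Trace of states from a hypothetical simulation run, one state per tick.
trace = [{"temp": 20}, {"temp": 25}, {"temp": 31}, {"temp": 28}]
safe = ("G", 3, ("atom", lambda s: s["temp"] < 35))
alarm = ("F", 3, ("atom", lambda s: s["temp"] > 30))
print(holds(safe, trace), holds(alarm, trace))
```

The naive version re-visits states repeatedly; the interval-based representation mentioned above exists precisely to avoid that re-computation during on-the-fly checking.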
Divided we stand: Parallel distributed stack memory management
We present an overview of the stack-based memory management techniques that we used in our non-deterministic and-parallel Prolog systems: &-Prolog and DASWAM. We believe
that the problems associated with non-deterministic and-parallel systems are more general than those encountered in or-parallel and deterministic and-parallel systems, which can be seen as subsets of this more general case. We build on the previously proposed "marker scheme", lifting some of the restrictions associated with the selection of goals while keeping (virtual) memory consumption down. We also review some of the other problems associated with the stack-based management scheme, such as the handling of forward and backward execution, cut, and roll-backs
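The bookkeeping behind a marker scheme can be sketched abstractly: each and-parallel goal's frames occupy a segment delimited by markers on a shared-format stack, so reclaiming an inner segment leaves a hole rather than moving live data above it. The model below is purely illustrative (all names hypothetical; the real scheme manages WAM stack sets, trail, and choice points):

```python
# Toy model of marker-delimited stack segments: trimming the top segment
# shrinks the stack, while trimming an inner segment leaves a reusable hole.

class Stack:
    def __init__(self):
        self.cells = []
        self.markers = {}            # goal id -> (start, end) segment bounds

    def run_goal(self, gid, frames):
        start = len(self.cells)
        self.cells.extend(frames)    # goal pushes its frames between markers
        self.markers[gid] = (start, len(self.cells))

    def trim(self, gid):
        """Reclaim a goal's segment after backtracking past it."""
        start, end = self.markers.pop(gid)
        if end == len(self.cells):
            del self.cells[start:]   # top segment: stack really shrinks
        else:
            for i in range(start, end):
                self.cells[i] = None # inner segment: hole, reused lazily

s = Stack()
s.run_goal("g1", ["f1", "f2"])
s.run_goal("g2", ["f3"])
s.trim("g1")                         # inner segment becomes holes
s.trim("g2")                         # top segment shrinks the stack
print(s.cells)
```

The (virtual) memory question the abstract raises is visible even in this toy: holes keep addresses stable but occupy space until everything above them is freed.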
Envisioning the qualitative effects of robot manipulation actions using simulation-based projections
Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections. Thereby it allows a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based first-order symbolic, qualitative representations, called timelines. The result of the envisioning is a set of detailed narratives represented by timelines which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks
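The step from logged sub-symbolic simulation data to interval-based qualitative timelines can be sketched as a run-length collapse of per-tick state samples. The predicate names, thresholds, and sample data below are illustrative, not taken from the framework described above:

```python
# Sketch: collapse per-tick numeric samples into (label, start, end)
# qualitative intervals, i.e., a tiny "timeline" in the sense used above.

def to_timeline(samples, classify):
    """samples: list of (tick, value); classify: value -> qualitative label."""
    timeline = []
    for t, value in samples:
        label = classify(value)
        if timeline and timeline[-1][0] == label:
            timeline[-1] = (label, timeline[-1][1], t)   # extend last interval
        else:
            timeline.append((label, t, t))               # open a new interval
    return timeline

# Hypothetical logged pancake height during a flipping action, one sample/tick.
samples = [(0, 0.0), (1, 0.0), (2, 0.3), (3, 0.6), (4, 0.0), (5, 0.0)]
classify = lambda h: "inAir" if h > 0.1 else "onPan"
print(to_timeline(samples, classify))
# [('onPan', 0, 1), ('inAir', 2, 3), ('onPan', 4, 5)]
```

Qualitative queries ("was the pancake ever in the air?", "for how long?") then become simple first-order questions over these intervals instead of scans over raw physics logs.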
AutoBayes: A System for Generating Data Analysis Programs from Statistical Models
Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis system for the generation of data analysis programs from statistical models. A statistical model specifies the properties for each problem variable (i.e., observation or parameter) and its dependencies in the form of a probability distribution. It is a fully declarative problem description, similar in spirit to a set of differential equations. From such a model, AutoBayes generates optimized and fully commented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Code is produced by a schema-guided deductive synthesis process. A schema consists of a code template and applicability constraints which are checked against the model during synthesis using theorem proving technology. AutoBayes augments schema-guided synthesis by symbolic-algebraic computation and can thus derive closed-form solutions for many problems. It is well-suited for tasks like estimating best-fitting model parameters for the given data. Here, we describe AutoBayes's system architecture, in particular the schema-guided synthesis kernel. Its capabilities are illustrated by a number of advanced textbook examples and benchmarks
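To make the symbolic-algebraic step concrete: for a declarative model such as "x[i] ~ N(mu, sigma), i = 1..n", setting the derivatives of the log-likelihood to zero yields closed-form estimators, the kind of solution the synthesis step can derive and emit as code. The hand-written sketch below shows what such generated code computes; it is not AutoBayes output:

```python
# Closed-form maximum-likelihood fit for x[i] ~ N(mu, sigma):
#   d/dmu    log L = 0  =>  mu_hat    = (1/n) * sum(x_i)
#   d/dsigma log L = 0  =>  sigma_hat = sqrt((1/n) * sum((x_i - mu_hat)^2))
import math

def fit_gaussian(xs):
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)  # biased MLE form
    return mu, sigma

print(fit_gaussian([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # (5.0, 2.0)
```

For models without closed forms, a schema-guided system falls back to templates for iterative methods (e.g., EM or numerical optimization) instead.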
Programming Languages for Distributed Computing Systems
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages
Knowledge-Based Systems. Overview and Selected Examples
The Advanced Computer Applications (ACA) project builds on IIASA's traditional strength in the methodological foundations of operations research and applied systems analysis, and its rich experience in numerous application areas including the environment, technology and risk. The ACA group draws on this infrastructure and combines it with elements of AI and advanced information and computer technology to create expert systems that have practical applications.
By emphasizing a directly understandable problem representation, based on symbolic simulation and dynamic color graphics, and the user interface as a key element of interactive decision support systems, models of complex processes are made understandable and available to non-technical users.
Several completely externally-funded research and development projects in the field of model-based decision support and applied Artificial Intelligence (AI) are currently under way, e.g., "Expert Systems for Integrated Development: A Case Study of Shanxi Province, The People's Republic of China."
This paper gives an overview of some of the expert systems that have been considered, compared or assessed during the course of our research, and a brief introduction to some of our related in-house research topics