    On Relaxing Metric Information in Linear Temporal Logic

    Metric LTL formulas rely on the next operator to encode time distances, whereas qualitative LTL formulas use only the until operator. This paper shows how to transform any metric LTL formula M into a qualitative formula Q such that Q is satisfiable if and only if M is satisfiable over words with variability bounded with respect to the largest distances used in M (i.e., the numbers of nested occurrences of next), while the size of Q is independent of those distances. Beyond its theoretical interest, this result can help simplify the verification of systems with time-granularity heterogeneity, where large distances are required to express the coarse-grain dynamics in terms of fine-grain time units.
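
    As a schematic illustration of the size issue the transformation avoids (not the paper's actual construction), a metric formula mentioning a distance d must spell d out as d nested next operators, so its size grows with d:

        % Assumes amsmath. Distance d is written as d nested next
        % operators, so the metric formula grows linearly with d:
        \[
          \varphi_M \;=\; \mathsf{X}^{d}\, p
          \;=\; \underbrace{\mathsf{X}\,\mathsf{X}\cdots\mathsf{X}}_{d\ \text{times}}\, p,
          \qquad |\varphi_M| \in \Theta(d).
        \]
        % The transformation instead yields a qualitative formula Q built
        % from until alone, with |Q| independent of d, equisatisfiable
        % with \varphi_M over words of variability bounded w.r.t. d.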

    On neighbourhood singleton-style consistencies for qualitative spatial and temporal reasoning

    Given a qualitative constraint network (QCN), a singleton-style consistency focuses on each base relation (atom) of a constraint separately, rather than on the constraint as a whole. This local consistency is essential for tackling fundamental reasoning problems associated with QCNs, such as minimal labeling, but can suffer from redundant constraint checks, especially when those checks occur far from where the pruning usually takes place. In this paper, we propose singleton-style consistencies that are applied only to the neighbourhood of a singleton-checked constraint instead of the whole network. We make a theoretical comparison with existing consistencies and prove some properties of the new ones. Further, we propose algorithms to enforce our consistencies, as well as parsimonious variants thereof, that are more efficient in practice than the state of the art. An experimental evaluation with random and structured QCNs of Allen's Interval Algebra in the phase transition region demonstrates the potential of our approach.
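
    A rough sketch of the idea in Python, using the three-relation point algebra (<, =, >) in place of Allen's Interval Algebra so that the composition table stays small; the function names and the naive enforcement loop below are illustrative assumptions, not the paper's algorithms or their parsimonious variants:

        from itertools import product
        import copy

        FULL = frozenset('<=>')                  # universal relation
        CONV = {'<': '>', '=': '=', '>': '<'}    # converse of each atom
        COMP = {                                 # point-algebra composition
            ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): set(FULL),
            ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
            ('>', '<'): set(FULL), ('>', '='): {'>'}, ('>', '>'): {'>'},
        }

        def compose(r, s):
            """Composition lifted from atoms to relations (sets of atoms)."""
            return set().union(*(COMP[(a, b)] for a in r for b in s))

        def closure(C, vs):
            """Naive algebraic closure (path consistency) restricted to the
            variables in vs; returns False iff a constraint empties out."""
            changed = True
            while changed:
                changed = False
                for i, k, j in product(vs, repeat=3):
                    if len({i, k, j}) < 3:
                        continue
                    refined = C[i][j] & compose(C[i][k], C[k][j])
                    if refined != C[i][j]:
                        if not refined:
                            return False
                        C[i][j] = refined
                        C[j][i] = {CONV[a] for a in refined}
                        changed = True
            return True

        def neighbourhood(C, i, j):
            """Variables sharing a non-universal constraint with i or j."""
            return {i, j} | {u for u in C
                             if C[i][u] != FULL or C[j][u] != FULL}

        def nsc(C):
            """Neighbourhood singleton-style consistency: prune each atom b
            whose singleton refinement admits no closed sub-network over
            the neighbourhood of its constraint."""
            changed = True
            while changed:
                changed = False
                for i, j in product(C, repeat=2):
                    if i >= j:
                        continue
                    for b in sorted(C[i][j]):
                        trial = copy.deepcopy(C)
                        trial[i][j], trial[j][i] = {b}, {CONV[b]}
                        if not closure(trial, neighbourhood(C, i, j)):
                            C[i][j] = C[i][j] - {b}      # prune unsupported atom
                            C[j][i] = C[j][i] - {CONV[b]}
                            changed = True
            return C

        # Toy usage: x < y and y < z force x < z on the unconstrained pair.
        C = {v: {u: set(FULL) for u in 'xyz'} for v in 'xyz'}
        C['x']['y'], C['y']['x'] = {'<'}, {'>'}
        C['y']['z'], C['z']['y'] = {'<'}, {'>'}
        print(nsc(C)['x']['z'])   # {'<'}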

    Hybrid Bayesian Networks Using Mixtures of Truncated Basis Functions

    This paper introduces MoTBFs, an R package for manipulating mixtures of truncated basis functions. This class of functions allows the representation of joint probability distributions involving discrete and continuous variables simultaneously, and includes mixtures of truncated exponentials and mixtures of polynomials as special cases. The package implements functions for learning the parameters of univariate, multivariate, and conditional distributions, and provides support for parameter learning in Bayesian networks with both discrete and continuous variables. Probabilistic inference using forward sampling is also implemented. Part of the functionality of the MoTBFs package relies on the bnlearn package, which includes functions for learning the structure of a Bayesian network from a data set. Leveraging this functionality, the MoTBFs package supports learning of MoTBF-based Bayesian networks over hybrid domains. We give a brief introduction to the methodological context and algorithms implemented in the package. An extensive illustrative example is used to describe the package, its functionality, and its usage.
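
    As a toy illustration of the function class (not of the package's API, which is in R), the following Python snippet evaluates a mixture of truncated exponentials, one of the special cases mentioned above; all parameter values are made up:

        import math

        def mte_density(x, a, b, k0, terms):
            """Evaluate f(x) = k0 + sum_i k_i * exp(r_i * x) on the
            truncation interval [a, b], and 0 elsewhere; terms is a list
            of (k_i, r_i) pairs, here chosen arbitrarily rather than
            learned from data as the package does."""
            if not a <= x <= b:
                return 0.0
            return k0 + sum(k * math.exp(r * x) for k, r in terms)

        # A made-up two-term MTE potential on [0, 1]:
        f = lambda x: mte_density(x, 0.0, 1.0, 0.3, [(0.2, 1.5), (-0.1, -2.0)])
        print(f(0.5), f(2.0))   # the second point lies outside [0, 1] -> 0.0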

    On redundancy in simple temporal networks

    The Simple Temporal Problem (STP) has been widely used in various applications to schedule tasks. For dynamical systems, scheduling needs to be efficient and flexible to handle uncertainty and perturbation. To this end, modern approaches usually encode the temporal information as an STP instance. This representation contains redundant information, which not only takes a significant amount of storage space but also makes scheduling inefficient because the representation is not concise. In this paper, we investigate the problem of simplifying an STP instance by removing redundant information. We show that such a simplification results in a unique minimal representation without loss of temporal information, and we present an efficient algorithm to achieve this task. Evaluation on a large benchmark dataset of STP instances shows a significant reduction in redundant information.
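
    A naive sketch of the idea in Python, under the standard distance-graph encoding in which an edge (u, v) of weight w states t_v - t_u <= w: an edge is redundant when the rest of the network already implies a bound at least as tight. The paper's algorithm is far more efficient and proves the resulting minimal representation unique; the function names here are hypothetical:

        def shortest(n, edges, src, dst):
            """Bellman-Ford shortest path; assumes the STN is consistent,
            i.e. its distance graph has no negative cycle."""
            INF = float('inf')
            d = [INF] * n
            d[src] = 0.0
            for _ in range(n - 1):
                for (u, v), w in edges.items():
                    if d[u] + w < d[v]:
                        d[v] = d[u] + w
            return d[dst]

        def prune_redundant(n, edges):
            """Greedily drop every edge implied by an alternative path in
            the edges kept so far; edges maps (u, v) -> w over 0..n-1."""
            kept = dict(edges)
            for (u, v), w in sorted(edges.items()):
                rest = {e: wt for e, wt in kept.items() if e != (u, v)}
                if shortest(n, rest, u, v) <= w:   # implied, hence redundant
                    kept = rest
            return kept

        # Toy usage: the direct edge (0, 2) is implied by the path 0 -> 1 -> 2.
        print(prune_redundant(3, {(0, 1): 2, (1, 2): 3, (0, 2): 6}))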

    Efficient Maximum A-Posteriori Inference in Markov Logic and Application in Description Logics

    Maximum a-posteriori (MAP) query in statistical relational models computes the most probable world given evidence and further knowledge about the domain. It is arguably one of the most important types of computational problems, since it is also used as a subroutine in weight learning algorithms. In this thesis, we discuss an improved inference algorithm and an application for MAP queries. We focus on Markov logic (ML) as the statistical relational formalism. Markov logic combines Markov networks with first-order logic by attaching weights to first-order formulas. For inference, we improve on existing work that translates MAP queries to integer linear programs (ILPs). The motivation is that existing ILP solvers are very stable and fast and can precisely estimate the quality of an intermediate solution. We focus on improving the translation process so that the resulting ILPs have fewer variables and fewer constraints. Our main contribution is the Cutting Plane Aggregation (CPA) approach, which leverages symmetries in ML networks and parallelizes MAP inference. Additionally, we integrate the cutting plane inference algorithm (Riedel 2008), which significantly reduces the number of groundings by solving multiple smaller ILPs instead of one large ILP. We present the new Markov logic engine RockIt, which outperforms state-of-the-art engines on standard Markov logic benchmarks. Afterwards, we apply the MAP query to description logics. Description logics (DLs) are knowledge representation formalisms whose expressivity is higher than propositional logic but lower than first-order logic. The most popular DLs have been standardized in the ontology language OWL and are an elementary component of the Semantic Web. We combine Markov logic, which essentially follows the semantics of a log-linear model, with description logics into log-linear description logics, in which weights can be attached to any description logic axiom. Furthermore, we introduce a new query type which computes the most probable 'coherent' world. Possible applications of log-linear description logics lie mainly in the areas of ontology learning and data integration. With our novel log-linear description logic reasoner ELog, we experimentally show that more expressivity increases quality and that the solutions of optimal solving strategies have higher quality than those of approximate solving strategies.
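
    For a concrete feel of the query itself, here is a brute-force MAP computation over a tiny ground network in Python (atoms, weights, and evidence are made up); the thesis's contribution is answering such queries at scale via the ILP translation, Cutting Plane Aggregation, and cutting plane inference rather than by enumeration:

        from itertools import product

        ATOMS = ['smokes_a', 'smokes_b', 'cancer_a']
        # Weighted ground formulas, each a weight plus a test on a world.
        FORMULAS = [
            (1.5, lambda w: (not w['smokes_a']) or w['cancer_a']),  # smoking -> cancer
            (1.1, lambda w: w['smokes_a'] == w['smokes_b']),        # friends agree
        ]
        EVIDENCE = {'smokes_a': True}

        def map_world():
            """Return the world maximizing the total weight of satisfied
            formulas, subject to the evidence (the MAP state of the
            corresponding log-linear model)."""
            best, best_score = None, float('-inf')
            for values in product([False, True], repeat=len(ATOMS)):
                world = dict(zip(ATOMS, values))
                if any(world[a] != v for a, v in EVIDENCE.items()):
                    continue                      # respect the evidence
                score = sum(wt for wt, sat in FORMULAS if sat(world))
                if score > best_score:
                    best, best_score = world, score
            return best, best_score

        print(map_world())   # -> smokes_b and cancer_a both true, score 2.6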