
    BV and Pomset Logic Are Not the Same

    BV and pomset logic are two logics that both conservatively extend unit-free multiplicative linear logic by a third binary connective, which (i) is non-commutative, (ii) is self-dual, and (iii) lies between the "par" and the "tensor". It was conjectured early on (more than 20 years ago) that these two logics, which share the same language, both admit cut elimination, and have connectives with essentially the same properties, are in fact the same. In this paper we show that this is not the case. We present a formula that is provable in pomset logic but not in BV.
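
    For context, the three properties of the added connective can be stated compactly. The following LaTeX sketch writes the self-dual non-commutative connective as "seq" (A ◁ B); this notation is an assumption for illustration and is not taken from the abstract itself.

```latex
% Sketch of properties (i)-(iii), writing the third connective as
% A \vartriangleleft B ("seq"); \parr ("par") is from the cmll package.
\begin{align*}
  A \vartriangleleft B &\not\equiv B \vartriangleleft A
    && \text{(i) non-commutative}\\
  (A \vartriangleleft B)^{\perp} &\equiv A^{\perp} \vartriangleleft B^{\perp}
    && \text{(ii) self-dual}\\
  A \otimes B \;\vdash\; A \vartriangleleft B &\;\vdash\; A \parr B
    && \text{(iii) between tensor and par}
\end{align*}
```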

    New Minimal Linear Inferences in Boolean Logic Independent of Switch and Medial

    A linear inference is a valid inequality of Boolean algebra in which each variable occurs at most once on each side. Equivalently, it is a linear rewrite rule on Boolean terms that constitutes a valid implication. Linear inferences have played a significant role in structural proof theory, in particular in models of substructural logics and in normalisation arguments for deep inference proof systems. Systems of linear logic and, later, deep inference are founded upon two particular linear inferences, switch: x ∧ (y ∨ z) → (x ∧ y) ∨ z, and medial: (w ∧ x) ∨ (y ∧ z) → (w ∨ y) ∧ (x ∨ z). It is well known that these two are not enough to derive all linear inferences (even modulo all valid linear equations), but beyond this little more is known about the structure of linear inferences in general. In particular, despite recurring attention in the literature, the smallest linear inference not derivable under switch and medial ("switch-medial-independent") was not previously known. In this work we leverage recently developed graphical representations of linear formulae to build an implementation that is capable of more efficiently searching for switch-medial-independent inferences. We use it to find two "minimal" 8-variable independent inferences and also prove that no smaller ones exist; in contrast, a previous approach based directly on formulae reached computational limits already at 7 variables. One of these new inferences derives some previously found independent linear inferences. The other exhibits structure seemingly beyond the scope of previous approaches we are aware of; in particular, its existence contradicts a conjecture of Das and Strassburger.
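
    As an illustration of what it means for switch and medial to be valid linear inferences, here is a small self-contained Python sketch that checks an inequality by enumerating all Boolean assignments. It is purely illustrative and is not the paper's graph-based implementation; the helper names are invented for this example.

```python
from itertools import product

def is_valid_inference(variables, lhs, rhs):
    """Check that lhs(assignment) implies rhs(assignment) for every Boolean
    assignment, i.e. that lhs -> rhs is a valid inequality of Boolean algebra."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if lhs(env) and not rhs(env):
            return False
    return True

# switch: x ∧ (y ∨ z) -> (x ∧ y) ∨ z   (each variable occurs once on each side)
switch_valid = is_valid_inference(
    ["x", "y", "z"],
    lambda e: e["x"] and (e["y"] or e["z"]),
    lambda e: (e["x"] and e["y"]) or e["z"],
)

# medial: (w ∧ x) ∨ (y ∧ z) -> (w ∨ y) ∧ (x ∨ z)
medial_valid = is_valid_inference(
    ["w", "x", "y", "z"],
    lambda e: (e["w"] and e["x"]) or (e["y"] and e["z"]),
    lambda e: (e["w"] or e["y"]) and (e["x"] or e["z"]),
)

print(switch_valid, medial_valid)  # expected: True True
```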

    A decidable temporal DL-Lite logic with undecidable first-order and datalog-rewritability of ontology-mediated atomic queries

    We design a logic in the temporal DL-Lite family (with non-Horn role inclusions and restricted temporalised roles) for which answering ontology-mediated atomic queries (OMAQs) can be done in ExpSpace, and even in PSpace for ontologies without existential quantification in the rule heads, but for which determining FO-rewritability or (linear) Datalog-rewritability of OMAQs is undecidable. On the other hand, we show (by reduction to monadic disjunctive Datalog) that deciding FO-rewritability of OMAQs in the non-temporal fragment of our logic can be done in 3NExpTime.
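
    For readers unfamiliar with the terminology, an ontology-mediated atomic query pairs an ontology with a single atomic query. The following toy LaTeX example (plain DL-Lite, no temporal operators, invented concept names not taken from the paper) also illustrates what an FO-rewriting looks like.

```latex
% Hypothetical DL-Lite example; concept names are invented for illustration.
\[
  \mathcal{O} \;=\; \{\ \textit{Professor} \sqsubseteq \textit{Person},\ \
                        \textit{Student} \sqsubseteq \textit{Person}\ \},
  \qquad
  \text{OMAQ: } (\mathcal{O},\, \textit{Person}(x))
\]
% This OMAQ is FO-rewritable: its certain answers over any data set are obtained
% by directly evaluating the first-order query
\[
  q(x) \;=\; \textit{Person}(x) \,\lor\, \textit{Professor}(x) \,\lor\, \textit{Student}(x)
\]
```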

    Predictive Runtime Verification of Stochastic Systems

    Runtime Verification (RV) is the formal analysis of the execution of a system against some properties at runtime. RV is particularly useful for stochastic systems that have a non-zero probability of failure at runtime. Standard RV constructs a monitor that checks only the currently observed execution of the system against the given properties. This dissertation proposes a framework for predictive RV, where the monitor instead checks the current execution, together with its finite extensions, against some property. The extensions are generated using a prediction model that is built from execution samples randomly generated from the system. The thesis statement is that predictive RV for stochastic systems is feasible, effective, and useful. The feasibility is demonstrated by providing a framework, called Prevent, that builds a predictive monitor by using trained prediction models to finitely extend an execution path and by computing the probabilities of the extensions that satisfy or violate the given property. The prediction model is trained using statistical learning techniques from independent and identically distributed samples of system executions. The prediction is the result of a quantitative bounded reachability analysis on the product of the prediction model and the automaton specifying the property. The analysis results are computed offline and stored in a lookup table. At runtime, the monitor obtains the state of the system on the prediction model from the observed execution, directly or by approximation, and uses the lookup table to retrieve the precomputed probability that the system at the current state will satisfy or violate the given property within some finite number of steps. The effectiveness of Prevent is shown by applying abstraction when constructing the prediction model. The abstraction is applied to the observation space and is based on extracting a symmetry relation between symbols that have similar probabilities of satisfying a property. The abstraction may introduce nondeterminism in the final model, which is handled by using a hidden state variable when building the prediction model. We also demonstrate that, under the convergence conditions of the learning algorithms, the prediction results from the abstract models are the same as those from the concrete models. Finally, the usefulness of Prevent is demonstrated in real-world applications by showing how it can be applied to predicting rare properties, that is, properties with a very low but non-zero probability of satisfaction. More specifically, we adjust the training algorithm to use samples generated by importance sampling, producing prediction models for rare properties without increasing the number of samples and without a negative impact on prediction accuracy.
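
    The offline/online split described above can be illustrated with a minimal Python sketch, assuming the prediction model is a small discrete-time Markov chain whose transitions emit observation symbols (with a deterministic observation-to-successor map) and the property is a DFA over the same symbols. All names and data structures here are invented for this illustration and are not Prevent's actual interface.

```python
def bounded_reachability_table(dtmc, dfa, horizon):
    """Offline step: for each product state (s, q) of the prediction model and the
    property automaton, compute the probability of reaching an accepting DFA state
    within `horizon` further steps, by backward dynamic programming."""
    prob = {(s, q): 1.0 if q in dfa["accepting"] else 0.0
            for s in dtmc["states"] for q in dfa["states"]}
    for _ in range(horizon):
        new_prob = {}
        for s, q in prob:
            if q in dfa["accepting"]:
                new_prob[(s, q)] = 1.0          # property already satisfied
                continue
            p = 0.0
            for s_next, symbol, p_trans in dtmc["trans"][s]:
                q_next = dfa["delta"][(q, symbol)]
                p += p_trans * prob[(s_next, q_next)]
            new_prob[(s, q)] = p
        prob = new_prob
    return prob

class LookupMonitor:
    """Runtime step: track the current product state from the observed execution
    and retrieve the precomputed probability from the lookup table."""
    def __init__(self, dtmc, dfa, table):
        self.dtmc, self.dfa, self.table = dtmc, dfa, table
        self.s, self.q = dtmc["initial"], dfa["initial"]

    def observe(self, symbol):
        # Assumes each observation determines the successor model state.
        self.s = self.dtmc["obs_successor"][(self.s, symbol)]
        self.q = self.dfa["delta"][(self.q, symbol)]
        return self.table[(self.s, self.q)]     # predicted satisfaction probability

# Toy example: symbols {"ok", "fail"}; property "observe 'fail' within the horizon".
dtmc = {
    "states": ["s0", "s1"],
    "initial": "s0",
    "trans": {"s0": [("s0", "ok", 0.9), ("s1", "fail", 0.1)],
              "s1": [("s1", "fail", 1.0)]},
    "obs_successor": {("s0", "ok"): "s0", ("s0", "fail"): "s1", ("s1", "fail"): "s1"},
}
dfa = {
    "states": ["q0", "q1"],
    "initial": "q0",
    "accepting": {"q1"},
    "delta": {("q0", "ok"): "q0", ("q0", "fail"): "q1",
              ("q1", "ok"): "q1", ("q1", "fail"): "q1"},
}
table = bounded_reachability_table(dtmc, dfa, horizon=3)
monitor = LookupMonitor(dtmc, dfa, table)
print(monitor.observe("ok"))  # probability of "fail" in the next 3 steps: 1 - 0.9**3 ≈ 0.271
```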

    Abduction and Anonymity in Data Mining

    This thesis investigates two new research problems that arise in modern data mining: reasoning on data mining results, and the privacy implications of data mining results. Most data mining algorithms rely on inductive techniques, trying to infer information that is generalized from the input data. Very often, however, this inductive step on raw data is not enough to answer the user's questions, and the data need to be processed again using other inference methods. In order to answer high-level user needs such as explanation of results, we describe an environment able to perform abductive (hypothetical) reasoning, since the solutions of such queries can often be seen as the set of hypotheses that satisfy some requirements. By using cost-based abduction, we show how classification algorithms can be boosted by performing abductive reasoning over the data mining results, improving the quality of the output. Another growing research area in data mining is privacy-preserving data mining. Due to the availability of large amounts of data, easily collected and stored via computer systems, new applications are emerging, but unfortunately privacy concerns can make data mining unsuitable. We study the privacy implications of data mining in a mathematical and logical context, focusing on the anonymity of the people whose data are analyzed. A formal theory of anonymity-preserving data mining is given, together with a number of anonymity-preserving algorithms for pattern mining. The post-processing improvement of data mining results (w.r.t. utility and privacy) is the central focus of the problems investigated in this thesis.
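
    Cost-based abduction, mentioned above, selects a least-cost set of hypotheses whose consequences cover the observations to be explained. The brute-force Python sketch below is purely illustrative (it is not the thesis's system), and the example hypotheses and observations are invented.

```python
from itertools import combinations

def cost_based_abduction(hypotheses, covers, observations):
    """Return a minimum-cost set of hypotheses whose combined consequences cover
    all observations. `hypotheses` maps each hypothesis to its cost; `covers`
    maps each hypothesis to the set of observations it explains.
    Brute force over subsets; suitable only for small illustrative instances."""
    best, best_cost = None, float("inf")
    names = list(hypotheses)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            explained = set().union(*(covers[h] for h in subset))
            cost = sum(hypotheses[h] for h in subset)
            if observations <= explained and cost < best_cost:
                best, best_cost = set(subset), cost
    return best, best_cost

# Hypothetical example: explain the evidence behind a "fraud" classification.
hypotheses = {"stolen_card": 3.0, "unusual_location": 1.0, "large_amount": 1.5}
covers = {"stolen_card": {"mismatch", "velocity"},
          "unusual_location": {"mismatch"},
          "large_amount": {"velocity"}}
observations = {"mismatch", "velocity"}
print(cost_based_abduction(hypotheses, covers, observations))
# expected: ({'unusual_location', 'large_amount'}, 2.5)
```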