
    Hidden-Markov Program Algebra with iteration

    We use Hidden Markov Models to motivate a quantitative compositional semantics for noninterference-based security with iteration, including a refinement- or "implements" relation that compares two programs with respect to their information leakage; and we propose a program algebra for source-level reasoning about such programs, in particular as a means of establishing that an "implementation" program leaks no more than its "specification" program. This joins two themes: we extend our earlier work, which treated iteration but only qualitatively, by making it quantitative; and we extend our earlier quantitative work by including iteration. We advocate stepwise refinement and source-level program algebra, both as conceptual reasoning tools and as targets for automated assistance. A selection of algebraic laws is given to support this view in the case of quantitative noninterference; and it is demonstrated on a simple iterated password-guessing attack.
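    As a toy illustration of the "leaks no more than" comparison (a minimal sketch, not the paper's program algebra), the Python snippet below compares the multiplicative Bayes leakage of two channels modelling a single password guess; the prior, the channels, and the choice of Bayes vulnerability as the leakage measure are illustrative assumptions.

    ```python
    # Minimal sketch (not the paper's algebra): model a "specification" and an
    # "implementation" program as channels from a secret password to an observable
    # outcome, and check the "leaks no more than" direction via Bayes leakage.
    import numpy as np

    def bayes_leakage(prior, channel):
        """Multiplicative Bayes leakage: posterior vulnerability / prior vulnerability."""
        joint = prior[:, None] * channel          # p(secret, observation)
        posterior_vuln = joint.max(axis=0).sum()  # expected max posterior probability
        return posterior_vuln / prior.max()

    prior = np.full(4, 0.25)  # secret: one of 4 equally likely passwords

    # Specification: one guess is checked, the observer sees hit or miss.
    spec = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.0, 1.0],
                     [0.0, 1.0]])

    # Implementation: the guess is never checked, so the observation is
    # independent of the secret -- it should leak no more than the specification.
    impl = np.array([[0.5, 0.5]] * 4)

    assert bayes_leakage(prior, impl) <= bayes_leakage(prior, spec)
    print(bayes_leakage(prior, spec), bayes_leakage(prior, impl))  # 2.0, 1.0
    ```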

    Hidden Markov Models and their Application for Predicting Failure Events

    We show how Markov mixed membership models (MMMM) can be used to predict the degradation of assets. We model the degradation path of individual assets to predict overall failure rates. Instead of a separate distribution for each hidden state, we use hierarchical mixtures of distributions in the exponential family. In our approach, the observation distribution of the states is a finite mixture of a small set of (simpler) distributions shared across all states. Using tied-mixture observation distributions offers several advantages: the mixtures act as a regularization for typically very sparse problems, and they reduce the computational effort of the learning algorithm since fewer distributions have to be estimated. Sharing mixture components also shares statistical strength between the Markov states and thus enables transfer learning. We determine, for individual assets, the trade-off between the risk of failure and extended operating hours by combining an MMMM with a partially observable Markov decision process (POMDP) to dynamically optimize the policy for when and how to maintain the asset. Comment: To be published in the proceedings of ICCS 2020; also available as Paul Hofmann and Zaid Tashman, "Hidden Markov Models and their Application for Predicting Failure Events", EasyChair Preprint no. 3183, 2020.
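    The tied-mixture idea itself fits in a few lines: every hidden state emits from one shared pool of component distributions and differs only in its mixture weights over that pool. The sketch below is a minimal illustration in NumPy/SciPy with Gaussian components; it does not reproduce the paper's hierarchical exponential-family mixtures, the learning algorithm, or the POMDP coupling, and all parameter values are made up.

    ```python
    # Minimal sketch of tied-mixture emissions: all hidden states share one small
    # pool of Gaussian components and differ only in their mixture weights.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_states, n_components = 3, 4

    # Shared component pool (means, stds) -- estimated once, reused by every state.
    means = np.array([0.0, 1.0, 2.5, 4.0])
    stds = np.array([0.5, 0.5, 0.8, 1.0])

    # State-specific weights over the shared components (each row sums to 1).
    weights = rng.dirichlet(np.ones(n_components), size=n_states)

    def emission_logprob(x):
        """Log p(x | state) for every hidden state, using the shared pool."""
        component_pdf = norm.pdf(x, loc=means, scale=stds)  # shape (n_components,)
        return np.log(weights @ component_pdf)              # shape (n_states,)

    print(emission_logprob(1.2))
    ```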

    Artificial Intelligence in the Context of Human Consciousness

    Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI’s development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make understanding both natural and artificial intelligence imperative. Researchers are continuing to discover the natural processes responsible for essential human skills like decision-making, understanding language, and performing multiple processes simultaneously. Artificial intelligence attempts to simulate these functions through techniques like artificial neural networks, Markov Decision Processes, Human Language Technology, and Multi-Agent Systems, which rely upon a combination of mathematical models and hardware.

    Credal Networks under Epistemic Irrelevance

    A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally. The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning.
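    To convey the recursive flavour of such inference (a minimal sketch under simplifying assumptions, not the paper's algorithms), the snippet below computes a lower expectation on a two-node chain whose local models are probability intervals, by a backward recursion over the graph; the function names, intervals, and gamble are all illustrative.

    ```python
    # Minimal sketch: two-node chain X -> Y with interval-valued local models.
    # The lower expectation of a gamble f on Y is computed recursively: first a
    # lower expectation given each value of X, then a lower expectation over X.

    def lower_expectation(f_values, p_interval):
        """Lower expectation of a gamble on a binary variable, over all
        probabilities p(value = 1) in the closed interval p_interval."""
        lo, hi = p_interval
        # The expectation is linear in p, so the minimum sits at an endpoint.
        return min(p * f_values[1] + (1 - p) * f_values[0] for p in (lo, hi))

    p_x = (0.3, 0.5)                              # interval for P(X = 1)
    p_y_given_x = {0: (0.1, 0.2), 1: (0.6, 0.9)}  # intervals for P(Y = 1 | X = x)
    f = (0.0, 1.0)                                # gamble on Y: f(0), f(1)

    # Backward step: lower expectation of f(Y) conditional on each value of X.
    inner = tuple(lower_expectation(f, p_y_given_x[x]) for x in (0, 1))
    # Outer step: treat the inner lower expectations as a gamble on X.
    print(lower_expectation(inner, p_x))          # conservative bound: 0.25
    ```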