249 research outputs found

    CHR(PRISM)-based Probabilistic Logic Learning

    PRISM is an extension of Prolog with probabilistic predicates and built-in support for expectation-maximization learning. Constraint Handling Rules (CHR) is a high-level programming language based on multi-headed multiset rewrite rules. In this paper, we introduce a new probabilistic logic formalism, called CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of "chance rules". The underlying PRISM system can then be used for several probabilistic inference tasks, including probability computation and parameter learning. We define the syntax and operational semantics of the CHRiSM language and illustrate it with examples. We introduce the notion of ambiguous programs and give a distribution semantics for unambiguous programs. Next, we describe an implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between CHRiSM and other probabilistic logic programming languages, in particular PCHR. Finally, we identify potential application domains.
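    To make the "chance rule" idea concrete, below is a minimal Python sketch of probabilistic multiset rewriting, the operational core the abstract describes. The rule format, store representation, and the coin example are illustrative assumptions, not CHRiSM's actual Prolog-based syntax.

```python
import random
from collections import Counter

# Illustrative stand-in for a CHRiSM "chance rule": if the head constraints
# occur in the multiset store, the rule fires with probability p, replacing
# the heads with the body constraints. This only mimics the operational
# idea; real CHRiSM rules are written in a Prolog-based syntax.

def apply_chance_rule(store, heads, body, p, rng=random):
    """Fire the rule with probability p if all heads are present.
    Returns True if the rule fired and the store was rewritten."""
    needed = Counter(heads)
    if all(store[h] >= n for h, n in needed.items()) and rng.random() < p:
        store.subtract(heads)
        store.update(body)
        return True
    return False

store = Counter(["coin"])
# Hypothetical coin-flip model: with probability 0.5 a coin becomes heads,
# otherwise a second (deterministic) rule turns it into tails.
if not apply_chance_rule(store, ["coin"], ["head"], 0.5):
    apply_chance_rule(store, ["coin"], ["tail"], 1.0)
print(+store)  # Counter({'head': 1}) or Counter({'tail': 1})
```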

    Inference in Probabilistic Logic Programs with Continuous Random Variables

    Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, De Raedt et al.'s ProbLog and Vennekens et al.'s LPAD, is aimed at combining statistical and logical knowledge representation and inference. A key characteristic of PLP frameworks is that they are conservative extensions to non-probabilistic logic programs, which have been widely used for knowledge representation. PLP frameworks extend traditional logic programming semantics to a distribution semantics, where the semantics of a probabilistic logic program is given in terms of a distribution over possible models of the program. However, the inference techniques used in these works rely on enumerating sets of explanations for a query answer. Consequently, these languages permit very limited use of random variables with continuous distributions. In this paper, we present a symbolic inference procedure that uses constraints and represents sets of explanations without enumeration. This permits us to reason over PLPs with Gaussian- or Gamma-distributed random variables (in addition to discrete-valued random variables) and linear equality constraints over reals. We develop the inference procedure in the context of PRISM; however, the procedure's core ideas can be easily applied to other PLP languages as well. An interesting aspect of our inference procedure is that PRISM's query evaluation process becomes a special case in the absence of any continuous random variables in the program. The symbolic inference procedure enables us to reason over complex probabilistic models, such as Kalman filters and a large subclass of Hybrid Bayesian networks, that were hitherto not possible in PLP frameworks. (To appear in Theory and Practice of Logic Programming.)
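    The key step that makes such symbolic inference tractable is that linear functions of Gaussian variables stay Gaussian, so a linear constraint's probability has a closed form instead of requiring enumeration of continuous outcomes. The sketch below illustrates this for a single linear constraint over two independent Gaussians; the representation and numbers are illustrative, not the paper's actual data structures.

```python
from math import erf, sqrt

# Closed-form step that lets a PLP avoid enumerating continuous outcomes:
# a linear constraint a*X + b*Y <= c over independent Gaussians
# X ~ N(mx, sx^2), Y ~ N(my, sy^2) has an exact probability, because
# a*X + b*Y is itself Gaussian.

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_linear_leq(a, mx, sx, b, my, sy, c):
    """P(a*X + b*Y <= c) for independent X ~ N(mx, sx^2), Y ~ N(my, sy^2)."""
    mean = a * mx + b * my
    std = sqrt((a * sx) ** 2 + (b * sy) ** 2)
    return phi((c - mean) / std)

# A query's probability is then a finite sum over discrete explanations,
# each weighted by switch probabilities and one closed-form Gaussian term:
p_switch = 0.3                       # probability of a discrete choice
p_query = p_switch * prob_linear_leq(1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0)
print(p_query)                       # 0.3 * P(X + Y <= 0) = 0.3 * 0.5 = 0.15
```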

    Tabling and Answer Subsumption for Reasoning on Logic Programs with Annotated Disjunctions

    Probabilistic Logic Programming is an active field of research, with many proposals for languages, semantics and reasoning algorithms. One such proposal, Logic Programming with Annotated Disjunctions (LPADs), represents probabilistic information in a sound and simple way. This paper presents the algorithm "Probabilistic Inference with Tabling and Answer subsumption" (PITA) for computing the probability of queries. Answer subsumption is a feature of tabling that allows the combination of different answers for the same subgoal when a partial order can be defined over them. We apply it here because probabilistic explanations (stored as BDDs in PITA) possess a natural lattice structure. PITA has been implemented in XSB and compared with ProbLog, cplint and CVE. The results show that, in almost all cases, PITA is able to solve larger problems and is faster than competing algorithms.
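    The lattice structure mentioned above can be made concrete with a small sketch: an answer for a subgoal is a set of explanations, and answer subsumption joins a new answer with the tabled one by set union. PITA stores explanations as BDDs; the exponential inclusion-exclusion below is only an illustrative stand-in for the exact probability a BDD computes efficiently, and the facts and probabilities are hypothetical.

```python
from itertools import combinations
from math import prod

fact_prob = {"f1": 0.4, "f2": 0.5, "f3": 0.6}   # hypothetical probabilistic facts

def join(answer_a, answer_b):
    """Lattice join used by answer subsumption: union of explanation sets."""
    return answer_a | answer_b

def probability(explanations):
    """Exact P(at least one explanation true), assuming independent facts,
    via inclusion-exclusion (exponential; BDDs avoid this blow-up)."""
    expls = list(explanations)
    total = 0.0
    for k in range(1, len(expls) + 1):
        for subset in combinations(expls, k):
            facts = frozenset().union(*subset)   # conjunction of explanations
            total += (-1) ** (k + 1) * prod(fact_prob[f] for f in facts)
    return total

table = {}  # subgoal -> joined answer, as in tabling
table["q"] = join({frozenset({"f1"})}, {frozenset({"f2", "f3"})})
print(probability(table["q"]))  # 0.4 + 0.5*0.6 - 0.4*0.5*0.6 = 0.58
```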

    Learning Visual Patterns: Imposing Order on Objects, Trajectories and Networks

    Fundamental to many tasks in the field of computer vision, this work considers the understanding of observed visual patterns in static images and dynamic scenes. Within this broad domain, we focus on three particular subtasks, contributing novel solutions to: (a) the subordinate categorization of objects (avian species specifically), (b) the analysis of multi-agent interactions using the agent trajectories, and (c) the estimation of camera network topology. In contrast to object recognition, where the presence or absence of certain parts is generally indicative of the basic-level category, the problem of subordinate categorization rests on the ability to establish salient distinctions amongst the characteristics of those parts which comprise the basic-level category. Focusing on an avian domain due to the fine-grained structure of the category taxonomy, we explore a pose-normalized appearance model based on a volumetric poselet scheme. The variation in shape and appearance properties of these parts across a taxonomy provides the cues needed for subordinate categorization. Our model associates the underlying image pattern parameters used for detection with corresponding volumetric part location, scale and orientation parameters. These parameters implicitly define a mapping from the image pixels into a pose-normalized appearance space, removing view and pose dependencies and facilitating fine-grained categorization with relatively few training examples. We next examine the problem of leveraging trajectories to understand interactions in dynamic multi-agent environments. We focus on perceptual tasks, those for which an agent's behavior is governed largely by the individuals and objects around them. We introduce kinetic accessibility, a model for evaluating the perceived, and thus anticipated, movements of other agents. This new model is then applied to the analysis of basketball footage. The kinetic accessibility measures are coupled with low-level visual cues and domain-specific knowledge for determining which player has possession of the ball and for recognizing events such as passes, shots and turnovers. Finally, we present two differing approaches for estimating camera network topology. The first technique seeks to partition a set of observations made in the camera network into individual object trajectories. As exhaustive consideration of the partition space is intractable, partitions are considered incrementally, adding observations while pruning unlikely partitions. Partition likelihood is determined by the evaluation of a probabilistic graphical model, balancing the consistency of appearances across a hypothesized trajectory with the latest predictions of camera adjacency. A primary benefit of estimating object trajectories is that higher-order statistics, as opposed to just first-order adjacency, can be derived, yielding resilience to camera failure and the potential for improved tracking performance between cameras. Unlike the former centralized technique, the latter takes a decentralized approach, estimating the global network topology with local computations using sequential Bayesian estimation on a modified multinomial distribution. Key to this method is an information-theoretic appearance model for observation weighting. The inherently distributed nature of the approach allows the simultaneous utilization of all sensors as processing agents in collectively recovering the network topology.
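    As a rough illustration of the decentralized topology estimation described above, the sketch below gives a sequential Bayesian update in the Dirichlet-multinomial style: each camera maintains a posterior over which camera a departing object reappears at, updated with soft counts weighted by appearance-match confidence. The Dirichlet form and the weighting scheme are assumptions for illustration; the thesis's modified multinomial and information-theoretic weighting are only approximated here.

```python
# Decentralized sequential Bayesian update for camera adjacency (sketch).
# Each camera keeps a symmetric Dirichlet posterior over successor cameras
# and adds a fractional count when a departing object is matched elsewhere.

class AdjacencyEstimator:
    def __init__(self, camera_ids, prior=1.0):
        # Symmetric Dirichlet prior over successor cameras.
        self.counts = {c: prior for c in camera_ids}

    def update(self, reappeared_at, match_weight):
        """Sequential update: add a soft count for the matched camera,
        weighted by appearance-match confidence in [0, 1]."""
        self.counts[reappeared_at] += match_weight

    def posterior(self):
        """Posterior mean adjacency distribution for this camera."""
        total = sum(self.counts.values())
        return {c: v / total for c, v in self.counts.items()}

est = AdjacencyEstimator(["cam1", "cam2", "cam3"])
est.update("cam2", 0.9)   # confident appearance match
est.update("cam2", 0.6)
est.update("cam3", 0.2)   # weak match contributes only a small count
print(est.posterior())    # mass concentrates on cam2 as the likely neighbor
```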

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Mapping the intuitive investigation: Seeking, evaluating and explaining the evidence

    The human mind has developed numerous cognitive tools to allow us to navigate the uncertainty of the world and make sense of situations and events. In this thesis I present a descriptive account of some of these tools by probing people’s ability to evaluate, seek, and explain evidence and information. This was achieved by appraising people’s behaviour in controlled experiments – predominantly representing legal-investigative scenarios – utilising normative causal models (e.g., causal Bayesian networks), and uncovering the alternative strategies that people employed when reasoning under uncertainty. In Chapter 4, I investigate people’s ability to engage in a pattern of reasoning termed ‘explaining away’ and propose, and find empirical support for, intuitive theories that address why the observed inference errors were made. In Chapter 5, I outline how people search for, and evaluate, evidence in a sequential investigative information-seeking paradigm – finding that people do not seek information simply to maximize a given utility function but rather are driven by additional strategies which are sensitive to factors such as the demands of the task and a novel form of risk aversion. I extend these findings to forensic professionals, and utilise a naturalistic study employing mobile eye-trackers during a mock crime scene investigation to elucidate the key role that ‘asking the right questions’ plays when engaging in sense-making practices ‘in the wild’. In Chapter 6, I explore people’s preferences for certain types of information relating to opportunity and motive at various stages of the legal-investigative process. Here, I demonstrate that people prefer ‘motive’ accounts of crimes (analogous to a teleology preference) at different stages of the investigative process. In two additional studies I demonstrate that these preferences are context-sensitive: namely, that ‘motive’ information tends to be more incriminating and less exculpatory. In a final set of experiments, outlined in Chapter 7, I investigate how drawing causal models of competing explanations of the evidence affects how these same explanations are evaluated – arguing that graphically representing the evidence bolsters people’s understanding of the probabilistic and logical significance of the causal structures drawn. In sum, this thesis provides a rich descriptive account of how people engage in various aspects of sense-making and decision-making under uncertainty. The work presented in this thesis ultimately aims to increase the ecological and descriptive validity of normative causal frameworks utilised in the cognitive sciences, whilst informing ways to formalise decision-making practices in real-world specialised domains.
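    For readers unfamiliar with ‘explaining away’: in a collider network such as Burglary → Alarm ← Earthquake, observing the alarm raises belief in a burglary, but additionally observing an earthquake accounts for the alarm and lowers that belief again. The sketch below computes this by brute-force enumeration; the network and numbers are the classic textbook alarm example, used here purely for illustration.

```python
from itertools import product

# Worked "explaining away" example in a collider network
# Burglary -> Alarm <- Earthquake, with the classic textbook CPT values.

P_b, P_e = 0.01, 0.02                  # independent cause priors

def p_alarm(b, e):
    """P(alarm | burglary=b, earthquake=e)."""
    return {(0, 0): 0.001, (1, 0): 0.94, (0, 1): 0.29, (1, 1): 0.95}[(b, e)]

def joint(b, e, a):
    pb = P_b if b else 1 - P_b
    pe = P_e if e else 1 - P_e
    pa = p_alarm(b, e) if a else 1 - p_alarm(b, e)
    return pb * pe * pa

def cond(query, evidence):
    """P(query | evidence) by enumeration over all worlds (b, e, a)."""
    num = den = 0.0
    for b, e, a in product([0, 1], repeat=3):
        world = {"b": b, "e": e, "a": a}
        if all(world[k] == v for k, v in evidence.items()):
            w = joint(b, e, a)
            den += w
            if all(world[k] == v for k, v in query.items()):
                num += w
    return num / den

print(cond({"b": 1}, {"a": 1}))          # P(burglary | alarm)          ~ 0.58
print(cond({"b": 1}, {"a": 1, "e": 1}))  # P(burglary | alarm, quake)   ~ 0.03
```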

    Individual Verifiability for E-Voting, From Formal Verification To Machine Learning

    The cornerstone of secure electronic voting protocols lies in the principle of individual verifiability. This thesis delves into the intricate task of harmonizing this principle with two other crucial aspects: ballot privacy and coercion-resistance. Individual verifiability serves as a critical safeguard: it empowers each voter to confirm that their vote has been accurately recorded and counted in the final tally. Ballot privacy, the assurance that a voter's choice remains confidential, is a fundamental right in democratic processes; it ensures that voters can express their political preferences without fear of retribution or discrimination. Coercion-resistance, in turn, refers to the system's resilience against attempts to influence or manipulate a voter's choice. The thesis also ventures into an empirical analysis of the effectiveness of individual voter checks in ensuring a correct election outcome, considering a scenario where an adversary possesses additional knowledge about the individual voters and can strategically decide which voters to target; the study aims to estimate the degree to which these checks can still guarantee the accuracy of the election results under such circumstances. The first contribution of this thesis is revisiting the seminal coercion-resistant e-voting protocol by Juels, Catalano, and Jakobsson (JCJ) and examining its usability and practicality. It discusses the credential handling system proposed by Neumann et al., which uses a smart card to unlock or fake credentials via a PIN code. The thesis identifies several security concerns with the JCJ protocol, including an attack on coercion-resistance due to information leakage from the removal of duplicate ballots, and addresses the issues of PIN errors and the single point of failure associated with the smart card. To mitigate these vulnerabilities, we propose hardware-flexible protocols that allow credentials to be stored by ordinary means while still being PIN-based and providing PIN error resilience. One of these protocols features a linear tally complexity, ensuring efficiency and scalability for large-scale electronic voting systems. The second contribution pertains to the exploration and validation of the ballot privacy definition proposed by Cortier et al., particularly in the context of an adversarial presence. Our exploration involves both the Selene and the MiniVoting abstract schemes: we apply Cortier's definition of ballot privacy and investigate how it holds up under this framework, employing tools for machine-checked proof to ensure that our conclusions are both accurate and trustworthy. The final contribution of this thesis is a detailed examination and analysis of the Estonian election results, conducted in several phases. The first phase involves a comprehensive marginal analysis of the Estonian election results: we compute upper bounds for several margins, providing a detailed statistical overview of the election outcome and identifying key trends and patterns in the voting data. We then train multiple binary classifiers to predict whether a voter is likely to verify their vote; this predictive modeling enables an adversary to gain insights into voter behavior and the factors that may influence the decision to verify. With the insights gained from the previous phases, an adversarial classification algorithm for identifying verifying voters is trained, and the success likelihood of such an adversary is calculated using various machine learning models, providing a more robust assessment of potential threats to the election process.
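    To illustrate the adversarial scenario analysed in the final contribution, the sketch below models voters who each verify their ballot with a known probability; an informed adversary manipulates the ballots least likely to be checked, and a single successful check is assumed to expose the manipulation. The model, numbers, and the comparison with an uninformed adversary are hypothetical illustrations, not the thesis's actual analysis.

```python
from itertools import combinations
from math import prod

def prob_undetected(verify_probs, targets):
    """P(no targeted voter verifies their ballot), checks independent."""
    return prod(1 - verify_probs[i] for i in targets)

# Hypothetical per-voter verification probabilities (e.g. classifier output):
p = [0.02, 0.05, 0.05, 0.10, 0.30, 0.40, 0.60, 0.80]
k = 3  # number of manipulated ballots

# Informed adversary: target the k voters least likely to verify.
informed = sorted(range(len(p)), key=lambda i: p[i])[:k]
print(prob_undetected(p, informed))   # ~0.88: checks give weak guarantees here

# Uninformed adversary: average escape probability over all k-ballot choices.
all_choices = list(combinations(range(len(p)), k))
print(sum(prob_undetected(p, t) for t in all_choices) / len(all_choices))
```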