836 research outputs found

    First-order logic learning in artificial neural networks

    Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results on learning relations using neuro-symbolic learning. This paper presents the system PAN, which can learn relations. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Symbolic inputs are encoded as integers using an invertible encoding function, which is applied in reverse to extract the output terms. The main advance of this system is a convention that allows construction of Artificial Neural Networks able to learn rules with the same power of expression as first-order definite clauses. The system is tested on three examples and the results are discussed.
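The abstract does not specify PAN's encoding function, but a classic invertible mapping of term pairs into a single integer is the Cantor pairing function; the sketch below is purely illustrative of how such an encoding can be made reversible (all names are hypothetical, not from the paper).

```python
import math

def pair(x: int, y: int) -> int:
    """Cantor pairing: a bijection from N x N to N, so encoding is invertible."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Recover (x, y) from pair(x, y) by inverting the triangular-number layout."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# A nested term such as f(g(a), b) can then be encoded bottom-up:
# assign each functor an integer and pair it with its (encoded) arguments,
# so the output terms can be reconstructed by repeated unpairing.
```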

    Probabilistic abductive logic programming using Dirichlet priors

    Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs whose execution corresponds to inferring the parameters of those models. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov Chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling (CGS). We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods, except one, perform similarly and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA models (Seed and Cluster LDA), and in the repeated insertion model (RIM). In the latter, our PPL yields conclusions similar to inference with EM for Mallows models.
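For readers unfamiliar with collapsed Gibbs sampling, the sketch below shows the standard collapsed Gibbs sampler for plain LDA, the baseline the paper's CGS builds on. This is not the paper's PPL; the function, its parameters, and the hyperparameter defaults are illustrative assumptions. Collapsing means the Dirichlet-distributed topic and word distributions are integrated out, so the sampler only tracks counts.

```python
import random

def collapsed_gibbs_lda(docs, V, K, alpha=0.1, beta=0.1, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.
    docs: list of documents, each a list of word ids in range(V).
    The Dirichlet parameters are integrated out ("collapsed"); only
    doc-topic and topic-word counts are maintained."""
    rng = random.Random(seed)
    D = len(docs)
    ndk = [[0] * K for _ in range(D)]   # doc-topic counts
    nkw = [[0] * V for _ in range(K)]   # topic-word counts
    nk = [0] * K                        # tokens per topic
    z = []                              # topic assignment per token
    for d, doc in enumerate(docs):      # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(K)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
            zd.append(k)
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]             # remove token's current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = j | rest) with priors collapsed
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta)
                           / (nk[j] + V * beta) for j in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
                z[d][i] = k
    return ndk, nkw
```

Because conjugate parameters are marginalized out, each sweep touches only integer counts, which is why collapsed samplers typically mix faster than uncollapsed ones.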

    Designing effective policies for minimal agents

    A policy for a minimal reactive agent is a set of condition-action rules used to determine its response to perceived environmental stimuli. When the policy predisposes the agent to achieving a stipulated goal we call it a teleo-reactive policy. This paper presents a framework for constructing and evaluating teleo-reactive policies for one or more minimal agents, based upon discounted-reward evaluation of policy-restricted subgraphs of complete situation-graphs. The main feature of the method is that it exploits explicit and definite associations of the agent's perceptions with states. The combinatorial burden that would potentially ensue from such associations can be ameliorated by suitable use of abstractions. The framework allows one to plan for a number of agents by focusing upon the behaviour of a single representative of them. It allows for varied behaviour to be modelled, including communication between agents. Simulation results presented here indicate that the method affords a good degree of scalability and predictive power.

    Optimizing minimal agents through abstraction

    Abstraction is a valuable tool for dealing with scalability in large state space contexts. This paper addresses the design, using abstraction, of good policies for minimal autonomous agents applied within a situation-graph framework. In this framework an agent’s policy is some function that maps perceptual inputs to actions deterministically. A good policy disposes the agent towards achieving one or more designated goal situations, and the design process aims to identify such policies. The agents to which the framework applies are assumed to have only partial observability, and in particular may not be able to perceive fully a goal situation. A further assumption is that the environment may influence an agent’s situation by unpredictable exogenous events, so that a policy cannot take advantage of a reliable history of previous actions. The Bellman discount measure provides a means of evaluating situations and hence the overall value of a policy. When abstraction is used, the accuracy of the method can be significantly improved by modifying the standard Bellman equations. This paper describes the modification and demonstrates its power through comparison with simulation results.
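The modified Bellman equations are not reproduced in the abstract, but the standard discounted evaluation that both of the situation-graph papers above build on can be sketched as iterative policy evaluation over a fixed policy's transition graph. This is a generic illustration, not the papers' method; the data layout and function name are assumptions.

```python
def evaluate_policy(transitions, rewards, gamma=0.9, tol=1e-8):
    """Iteratively solve V(s) = R(s) + gamma * sum_{s'} P(s'|s) V(s')
    for a fixed policy. `transitions[s]` maps each successor state to
    its probability under that policy; sweeps repeat until the largest
    per-state change falls below `tol`."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, successors in transitions.items():
            v = rewards[s] + gamma * sum(p * V[s2] for s2, p in successors.items())
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V
```

For example, with an absorbing goal state paying reward 1 per step and gamma = 0.9, the goal's value converges to 1 / (1 - 0.9) = 10, and a state one step away converges to 9.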

    The complexity and generality of learning answer set programs

    No full text
    Traditionally, most of the work in the field of Inductive Logic Programming (ILP) has addressed the problem of learning Prolog programs. On the other hand, Answer Set Programming is increasingly being used as a powerful language for knowledge representation and reasoning, and is also gaining increasing attention in industry. Consequently, research activity in ILP has widened to the area of Answer Set Programming, witnessing the proposal of several new learning frameworks that have extended ILP to learning answer set programs. In this paper, we investigate the theoretical properties of these existing frameworks for learning programs under the answer set semantics. Specifically, we present a detailed analysis of the computational complexity of each of these frameworks with respect to the two decision problems of deciding whether a hypothesis is a solution of a learning task and deciding whether a learning task has any solutions. We introduce a new notion of generality of a learning framework, which enables us to define a framework to be more general than another in terms of being able to distinguish one ASP hypothesis solution from a set of incorrect ASP programs. Based on this notion, we formally prove a generality relation over the set of existing frameworks for learning programs under answer set semantics. In particular, we show that our recently proposed framework, Context-dependent Learning from Ordered Answer Sets, is more general than brave induction, induction of stable models, and cautious induction, and maintains the same complexity as cautious induction, which has the highest complexity of these frameworks.

    Labelled Natural Deduction for Substructural Logics

    No full text
    In this paper a uniform methodology to perform Natural Deduction over the family of linear, relevance and intuitionistic logics is proposed. The methodology follows the Labelled Deductive Systems (LDS) discipline, where the deductive process manipulates declarative units: formulas labelled according to a labelling algebra. In the system described here, labels are either ground terms or variables of a given labelling language and inference rules manipulate formulas and labels simultaneously, generating (whenever necessary) constraints on the labels used in the rules. A set of natural deduction style inference rules is given, and the notion of a derivation is defined which associates a labelled natural deduction style "structural derivation" with a set of generated constraints. Algorithmic procedures, based on a technique called resource abduction, are defined to solve the constraints generated within a derivation, and their termination conditions discussed. A natural deduction derivation is correct with respect to a given substructural logic if, under the condition that the algorithmic procedures terminate, the associated set of constraints is satisfied with respect to the underlying labelling algebra. This is shown by proving that the natural deduction system is sound and complete with respect to the LKE tableaux system [DG94].

    Transparent modelling of finite stochastic processes for multiple agents

    Stochastic processes are ubiquitous, from automated engineering, through financial markets, to space exploration. These systems are typically highly dynamic, unpredictable and resistant to analytic methods, and they demand the orchestration of long control sequences which are both highly complex and uncertain. This report examines some existing single- and multi-agent modelling frameworks, details their strengths and weaknesses, and uses the experience to identify some fundamental tenets of good practice in modelling stochastic processes. It goes on to develop a new family of frameworks based on these tenets, which can model single- and multi-agent domains with equal clarity and flexibility, while remaining close enough to the existing frameworks that existing analytic and learning tools can be applied with little or no adaptation. Some simple and larger examples illustrate the similarities and differences of this approach, and a discussion of the challenges inherent in developing more flexible tools to exploit these new frameworks concludes matters.

    Mapping UML models incorporating OCL constraints into object-Z

    Focusing on object-oriented designs, this paper proposes a mapping for translating systems modelled in the Unified Modelling Language (UML) incorporating Object Constraint Language (OCL) constraints into formal software specifications in Object-Z. Joint treatment of semi-formal model constructs and constraints within a single translation framework and conversion tool is novel, and leads to the generation of much richer formal specifications than is otherwise possible. This paper complements previous analyses by paying particular attention to the generation of complete Object-Z structures. Integration of proposals to extend the OCL to include action constraints also boosts the expressivity of the translated specifications. The main features of the supporting tool are described.

    Search space expansion for efficient incremental inductive logic programming from streamed data

    In the past decade, several systems for learning Answer Set Programs (ASP) have been proposed, including the recent FastLAS system. Compared to other state-of-the-art approaches to learning ASP, FastLAS is more scalable: rather than computing the hypothesis space in full, it computes a much smaller subset relative to a given set of examples that is nonetheless guaranteed to contain an optimal solution to the task (called an OPT-sufficient subset). On the other hand, like many other Inductive Logic Programming (ILP) systems, FastLAS is designed to be run on a fixed learning task, meaning that if new examples are discovered after learning, the whole process must be run again. In many real applications, data arrives in a stream. Rerunning an ILP system from scratch each time new examples arrive is inefficient. In this paper we address this problem by presenting IncrementalLAS, a system that uses a new technique, called hypothesis space expansion, to enable a FastLAS-like OPT-sufficient subset to be expanded each time new examples are discovered. We prove that this preserves FastLAS's guarantee of finding an optimal solution to the full task (including the new examples), while removing the need to repeat previous computations. Through our evaluation, we demonstrate that running IncrementalLAS on tasks updated with sequences of new examples is significantly faster than re-running FastLAS from scratch on each updated task.