59 research outputs found

    Precise Multi-Neuron Abstractions for Neural Network Certification

    Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a verifier which can handle realistic networks in a precise manner remains an open and difficult challenge. In this paper, we take a major step in addressing this challenge and present a new framework, called PRIMA, that computes precise convex approximations of arbitrary non-linear activations. PRIMA is based on novel approximation algorithms that compute the convex hull of polytopes, leveraging concepts from computational geometry. The algorithms have polynomial complexity, yield fewer constraints, and minimize precision loss. We evaluate the effectiveness of PRIMA on challenging neural networks with ReLU, Sigmoid, and Tanh activations. Our results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness for up to 16%, 30%, and 34% more images than prior work on ReLU-, Sigmoid-, and Tanh-based networks, respectively.
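    The paper's multi-neuron relaxations are not reproduced here, but the single-neuron baseline they refine is easy to state: for a ReLU neuron with pre-activation bounds l < 0 < u, the convex hull of the graph of y = max(x, 0) is the classic "triangle" relaxation given by y >= 0, y >= x and y <= u*(x - l)/(u - l). A minimal sketch (the bounds and the numerical check are illustrative assumptions, not PRIMA itself):

```python
import numpy as np

def relu_triangle_upper(l, u):
    """Upper bound of the single-neuron 'triangle' relaxation of y = ReLU(x)
    on [l, u] with l < 0 < u.  Together with y >= 0 and y >= x, the line
    y <= u * (x - l) / (u - l) describes the convex hull of the ReLU graph."""
    assert l < 0 < u
    slope = u / (u - l)                  # slope of the chord from (l, 0) to (u, u)
    return lambda x: slope * (x - l)

# Sanity check: every point on the ReLU graph satisfies all three constraints.
l, u = -2.0, 3.0
upper = relu_triangle_upper(l, u)
xs = np.linspace(l, u, 1001)
ys = np.maximum(xs, 0.0)                 # exact ReLU values
assert np.all(ys >= 0) and np.all(ys >= xs) and np.all(ys <= upper(xs) + 1e-12)
print("triangle relaxation contains the ReLU graph on [%g, %g]" % (l, u))
```

    Multi-neuron abstractions such as PRIMA tighten this picture by constraining several neurons jointly, which is where the convex hulls of polytopes mentioned in the abstract come in.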

    Characterizations of discrete Sugeno integrals as polynomial functions over distributive lattices

    We give several characterizations of discrete Sugeno integrals over bounded distributive lattices, as particular cases of lattice polynomial functions, that is, functions which can be represented in the language of bounded lattices using variables and constants. We also consider the subclass of term functions as well as the classes of symmetric polynomial functions and weighted minimum and maximum functions, and present their characterizations, accordingly. Moreover, we discuss normal form representations of these functions.
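    For readers unfamiliar with the object being characterized: over the bounded chain [0, 1], the discrete Sugeno integral of x = (x_1, ..., x_n) with respect to a capacity mu has the max-min form S_mu(x) = max over nonempty A of min(mu(A), min of x_i over i in A), which is exactly a lattice polynomial with the x_i as variables and the mu(A) as constants. A small sketch under that standard [0, 1] setting (the capacity values are made up; the paper works over arbitrary bounded distributive lattices, where max and min become join and meet):

```python
from itertools import combinations

def sugeno_integral(x, mu):
    """Discrete Sugeno integral of x = (x_1, ..., x_n) with respect to a
    capacity mu, given as a dict from frozensets of indices to values.

    Uses the max-min lattice polynomial form
        S_mu(x) = max_{A nonempty} min( mu(A), min_{i in A} x_i ),
    i.e. a polynomial function with the x_i as variables and mu(A) as constants.
    """
    n = len(x)
    best = None
    for k in range(1, n + 1):
        for A in combinations(range(n), k):
            val = min(mu[frozenset(A)], min(x[i] for i in A))
            best = val if best is None else max(best, val)
    return best

# Toy capacity on {0, 1, 2} with values in [0, 1].
mu = {
    frozenset({0}): 0.2, frozenset({1}): 0.5, frozenset({2}): 0.4,
    frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.8,
    frozenset({0, 1, 2}): 1.0,
}
print(sugeno_integral((0.7, 0.3, 0.9), mu))   # prints 0.5
```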

    Threshold functions and Poisson convergence for systems of equations in random sets

    We present a unified framework to study threshold functions for the existence of solutions to linear systems of equations in random sets, which includes arithmetic progressions, sum-free sets, $B_h[g]$-sets and Hilbert cubes. In particular, we show that there exists a threshold function for the property "$\mathcal{A}$ contains a non-trivial solution of $M\cdot\mathbf{x}=\mathbf{0}$", where $\mathcal{A}$ is a random set and each of its elements is chosen independently with the same probability from the interval of integers $\{1,\dots,n\}$. Our study contains a formal definition of trivial solutions for any combinatorial structure, extending a previous definition by Ruzsa when dealing with a single equation. Furthermore, we study the behaviour of the distribution of the number of non-trivial solutions at the threshold scale. We show that it converges to a Poisson distribution whose parameter depends on the volumes of certain convex polytopes arising from the linear system under study, as well as on the symmetry inherent in the structures, which we formally define and characterize.
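    A concrete single-equation instance of this setup (an illustration, not the paper's general framework): 3-term arithmetic progressions are the solutions of x - 2y + z = 0, and the constant solutions x = y = z are the trivial ones in Ruzsa's sense. The sketch below samples a binomial random subset of {1, ..., n} at the heuristic threshold scale p ≈ n^(-2/3), where the expected number of progressions (on the order of n^2 p^3) is of constant order, and counts non-trivial solutions; the Poisson behaviour described in the abstract concerns the distribution of exactly this kind of count. Parameters are illustrative assumptions.

```python
import random

def count_nontrivial_3aps(A):
    """Count non-trivial solutions in A of x - 2*y + z = 0 (3-term arithmetic
    progressions), taking x < z so each progression is counted once; the
    constant solutions x = y = z are the trivial ones and never appear here."""
    S = set(A)
    count = 0
    for x in S:
        for z in S:
            if x < z and (x + z) % 2 == 0 and (x + z) // 2 in S:
                count += 1
    return count

def random_subset(n, p, rng):
    """Each element of {1, ..., n} is kept independently with probability p."""
    return [k for k in range(1, n + 1) if rng.random() < p]

rng = random.Random(0)
n = 20_000
p = n ** (-2 / 3)        # threshold scale for 3-APs: E[#solutions] ~ n^2 p^3 = Theta(1)
samples = [count_nontrivial_3aps(random_subset(n, p, rng)) for _ in range(200)]
print("mean number of non-trivial 3-AP solutions:", sum(samples) / len(samples))
```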

    A Cognitive Robotic Imitation Learning System Based On Cause-Effect Reasoning

    As autonomous systems become more intelligent and ubiquitous, it is increasingly important that their behavior can be easily controlled and understood by human end users. Robotic imitation learning has emerged as a useful paradigm for meeting this challenge. However, much of the research in this area focuses on mimicking the precise low-level motor control of a demonstrator, rather than interpreting the intentions of a demonstrator at a cognitive level, which limits the ability of these systems to generalize. In particular, cause-effect reasoning is an important component of human cognition that is under-represented in these systems. This dissertation contributes a novel framework for cognitive-level imitation learning that uses parsimonious cause-effect reasoning to generalize demonstrated skills, and to justify its own actions to end users. The contributions include new causal inference algorithms, which are shown formally to be correct and have reasonable computational complexity characteristics. Additionally, empirical validations both in simulation and on board a physical robot show that this approach can efficiently and often successfully infer a demonstrator’s intentions on the basis of a single demonstration, and can generalize learned skills to a variety of new situations. Lastly, computer experiments are used to compare several formal criteria of parsimony in the context of causal intention inference, and a new criterion proposed in this work is shown to compare favorably with more traditional ones. In addition, this dissertation takes strides towards a purely neurocomputational implementation of this causally-driven imitation learning framework. In particular, it contributes a novel method for systematically locating fixed points in recurrent neural networks. Fixed points are relevant to recent work on neural networks that can be “programmed” to exhibit cognitive-level behaviors, like those involved in the imitation learning system developed here. As such, the fixed point solver developed in this work is a tool that can be used to improve our engineering and understanding of neurocomputational cognitive control in the next generation of autonomous systems, ultimately resulting in systems that are more pliable and transparent.
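    The dissertation's own fixed-point solver is not reproduced here; as background, the sketch below shows a generic way fixed points of a recurrent update are located, by root-finding on the residual step(h) - h, so that a solution h* satisfies h* = tanh(W h* + b). The toy network and the SciPy root-finder are assumptions for illustration, not the method contributed by the dissertation.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # toy recurrent weights
b = rng.normal(scale=0.1, size=n)                      # toy bias

def step(h):
    """One step of a vanilla recurrent network with no external input."""
    return np.tanh(W @ h + b)

def residual(h):
    """A state h is a fixed point exactly when step(h) - h = 0."""
    return step(h) - h

# Locate a fixed point by root-finding on the residual from a random start.
sol = root(residual, rng.normal(size=n), method="hybr")
h_star = sol.x
print("converged:", sol.success,
      " max |step(h*) - h*| =", np.max(np.abs(residual(h_star))))
```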

    Master index of volumes 61–70


    Configuration structures, event structures and Petri nets

    In this paper the correspondence between safe Petri nets and event structures, due to Nielsen, Plotkin and Winskel, is extended to arbitrary nets without self-loops, under the collective token interpretation. To this end we propose a more general form of event structure, matching the expressive power of such nets. These new event structures and nets are connected by relating both notions with configuration structures, which can be regarded as representations of either event structures or nets that capture their behaviour in terms of action occurrences and the causal relationships between them, but abstract from any auxiliary structure. A configuration structure can also be considered logically, as a class of propositional models, or, equivalently, as a propositional theory in disjunctive normal form. Converting this theory to conjunctive normal form is the key.
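    A minimal sketch of the logical reading mentioned above, under illustrative assumptions (two events a and b, with b depending on a): each configuration becomes one disjunct of a DNF theory stating exactly which events have occurred, and distributing disjunction over the terms (dropping tautological clauses) yields an equivalent CNF. This is only an illustration of the DNF/CNF correspondence the abstract refers to, not the paper's construction.

```python
from itertools import product

def configurations_to_dnf(events, configs):
    """Encode a configuration structure over 'events' as a propositional theory
    in disjunctive normal form: one term per configuration, asserting exactly
    which events have occurred.  A literal is a pair (event, polarity)."""
    return [frozenset((e, e in C) for e in events) for C in configs]

def dnf_to_cnf(dnf):
    """Convert a DNF term list to CNF by distribution: each clause picks one
    literal from every term.  Tautological clauses (containing both e and
    not-e) are dropped.  Exponential in general, fine for this tiny example."""
    clauses = set()
    for choice in product(*dnf):
        clause = frozenset(choice)
        if not any((e, not pol) in clause for (e, pol) in clause):
            clauses.add(clause)
    return clauses

# Toy example: events a, b where b causally depends on a.
events = ("a", "b")
configs = [set(), {"a"}, {"a", "b"}]   # the empty, {a} and {a, b} configurations
dnf = configurations_to_dnf(events, configs)
for clause in sorted(dnf_to_cnf(dnf), key=len):
    print(sorted(clause))              # prints the single clause (a or not b)
```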