
    Boosting-based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks

    Understanding the characteristics of neural networks is important but difficult due to their complex structures and behaviors. Some previous work proposes to transform neural networks into equivalent Boolean expressions and to apply verification techniques to the characteristics of interest. This approach is promising since the rich body of verification techniques for circuits and other Boolean expressions can be readily applied. The bottleneck is the time complexity of the transformation. More precisely, (i) each neuron of the network, i.e., a linear threshold function, is converted to a Binary Decision Diagram (BDD), and (ii) these are further combined into some final form, such as a Boolean circuit. For a linear threshold function with $n$ variables, an existing method takes $O(n 2^{\frac{n}{2}})$ time to construct an ordered BDD of size $O(2^{\frac{n}{2}})$ consistent with some variable ordering. However, it is non-trivial to choose a variable ordering producing a small BDD among the $n!$ candidates. We propose a method to convert a linear threshold function to a specific form of BDD based on the boosting approach from the machine learning literature. Our method takes $O(2^n \mathrm{poly}(1/\rho))$ time and outputs a BDD of size $O(\frac{n^2}{\rho^4}\ln\frac{1}{\rho})$, where $\rho$ is the margin of some consistent linear threshold function. Our method does not need to search for good variable orderings and produces a smaller expression when the margin of the linear threshold function is large. Moreover, our method is based on a new boosting algorithm, which is of independent interest. We also propose a method to combine the resulting BDDs into the final Boolean expression representing the neural network.
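    For intuition about step (i) only, here is a minimal sketch, not the paper's boosting-based construction: an ordered BDD for a single linear threshold function $f(x) = [w \cdot x \ge \theta]$ can be built by recursing on the variables in a fixed order and memoizing on the threshold still to be met. The helper names (`build_obdd`, `node`) and the fixed ordering $x_1, \dots, x_n$ are illustrative assumptions.

```python
# Minimal sketch (not the paper's boosting-based method): build an ordered BDD
# for a linear threshold function f(x) = [w . x >= theta] over {0,1}^n by
# recursing on the variables in a fixed order and memoizing on the remaining
# threshold.  Runs in time pseudo-polynomial in the integer weights.
from functools import lru_cache

def build_obdd(weights, theta):
    n = len(weights)
    suffix_max = [0] * (n + 1)   # max achievable sum using variables i..n-1
    suffix_min = [0] * (n + 1)   # min achievable sum using variables i..n-1
    for i in range(n - 1, -1, -1):
        suffix_max[i] = suffix_max[i + 1] + max(weights[i], 0)
        suffix_min[i] = suffix_min[i + 1] + min(weights[i], 0)

    @lru_cache(maxsize=None)
    def node(i, need):
        # need = remaining threshold to reach using variables x_i..x_{n-1}
        if need <= suffix_min[i]:
            return True                      # satisfied by every suffix assignment
        if need > suffix_max[i]:
            return False                     # unsatisfiable by every suffix assignment
        lo = node(i + 1, need)               # branch x_i = 0
        hi = node(i + 1, need - weights[i])  # branch x_i = 1
        if lo == hi:                         # BDD reduction rule
            return lo
        return (i, lo, hi)                   # decision node testing variable x_i

    return node(0, theta)

# Example: majority of three variables, f(x) = [x1 + x2 + x3 >= 2]
print(build_obdd([1, 1, 1], 2))
```

    This standard construction depends on the variable ordering and the weights, which is exactly the dependence the boosting-based method is designed to avoid.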

    08381 Abstracts Collection -- Computational Complexity of Discrete Problems

    From the 14th to the 19th of September, the Dagstuhl Seminar 08381 "Computational Complexity of Discrete Problems" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work as well as open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this report. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Potential of quantum finite automata with exact acceptance

    The potential of exact quantum information processing is an interesting, important and intriguing issue. For example, it has been believed that quantum tools can provide significant (that is, larger than polynomial) advantages in the case of exact quantum computation only, or mainly, for problems with very special structures. We show that this is not the case. In this paper the potential of quantum finite automata producing outcomes not only with (high) probability but with certainty (so-called exactly) is explored in the context of their use for solving promise problems and with respect to the size of the automata. It is shown that for solving particular classes $\{A^n\}_{n=1}^{\infty}$ of promise problems, even ones without any very special structure, the succinctness of the exact quantum finite automata under consideration, with respect to the number of (basis) states, can be very small (indeed constant), although it grows proportionally to $n$ when deterministic finite automata (DFAs) of the same power are used. This is also demonstrated for the case in which the component languages of the promise problems solvable by DFAs are non-regular. The method used can be applied to find further exact quantum finite automata or quantum algorithms for other promise problems. Comment: We have improved the presentation of the paper. Accepted to the International Journal of Foundations of Computer Science.
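    To make the size gap concrete, the sketch below simulates a two-state quantum finite automaton for a unary promise problem; it is a standard example from this line of work, not necessarily the family $\{A^n\}$ studied in the paper. Yes-instances are $a^{2in}$ and no-instances are $a^{(2i+1)n}$; each letter applies a rotation by $\pi/(2n)$, the final measurement answers with certainty, and a DFA separating the two cases needs a number of states proportional to $n$.

```python
# Hedged sketch of a two-state exact QFA for a unary promise problem
# (a standard example, not necessarily the construction of this paper).
import numpy as np

def exact_qfa_accepts(word_length: int, n: int) -> bool:
    theta = np.pi / (2 * n)                       # rotation applied per input letter
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    state = np.array([1.0, 0.0])                  # start in basis state |0>
    for _ in range(word_length):
        state = rot @ state
    p_accept = state[0] ** 2                      # measure in the computational basis
    # Exactness: on promised inputs the acceptance probability is 0 or 1.
    assert np.isclose(p_accept, 0.0) or np.isclose(p_accept, 1.0)
    return bool(np.isclose(p_accept, 1.0))

n = 5
print(exact_qfa_accepts(2 * 3 * n, n))        # yes-instance a^(6n): accepted with certainty
print(exact_qfa_accepts((2 * 3 + 1) * n, n))  # no-instance a^(7n): rejected with certainty
```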

    Complexity classifications for different equivalence and audit problems for Boolean circuits

    We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases) we obtain efficient algorithms, while for all other gate types we show these problems are at least NP-hard. Comment: 25 pages, 1 figure.
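    For a sense of the baseline, here is an illustration that is not from the paper: over an unrestricted basis, equivalence of two circuits can always be decided by brute force over all $2^n$ inputs, which is the exponential behavior the paper's tractable cases for restricted bases avoid. The helper `equivalent` is an illustrative assumption, not an algorithm from the paper.

```python
# Illustrative brute-force equivalence check for two Boolean circuits given as
# Python callables over n inputs.  Takes 2^n evaluations; equivalence of
# general circuits is coNP-complete, whereas the paper identifies restricted
# gate bases for which equivalence and audit problems become tractable.
from itertools import product

def equivalent(circuit_f, circuit_g, n: int) -> bool:
    return all(circuit_f(*x) == circuit_g(*x)
               for x in product((False, True), repeat=n))

# Example: two representations of XOR over the basis {AND, OR, NOT}
f = lambda a, b: (a or b) and not (a and b)
g = lambda a, b: (a and not b) or (not a and b)
print(equivalent(f, g, 2))   # True
```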

    From Dust to Dawn: Practically Efficient Two-Party Secure Function Evaluation Protocols and their Modular Design

    General two-party Secure Function Evaluation (SFE) allows mutually distrusting parties to (jointly) correctly compute \emph{any} function on their private input data, without revealing the inputs. SFE, properly designed, guarantees to satisfy the most stringent security requirements, even for interactive computation. Two-party SFE can benefit almost any client-server interaction where privacy is required, such as privacy-preserving credit checking, medical classification, or face recognition. Today, SFE is the subject of an immense amount of research in a variety of directions and is not easy to navigate. In this paper, we systematize the most \emph{practically important} work of the vast research knowledge on \emph{general} SFE. It turns out that the most efficient SFE protocols today are obtained by combining several basic techniques, such as garbled circuits and homomorphic encryption. We limit our detailed discussion to efficient general techniques. In particular, we do not discuss the details of currently \emph{practically inefficient} techniques, such as fully homomorphic encryption (although we elaborate on its practical relevance), nor do we cover \emph{specialized} techniques applicable only to small classes of functions. As an important practical contribution, we present a framework in which today's practically most efficient techniques for general SFE can be viewed as building blocks with well-defined interfaces that can be easily combined to establish a complete efficient solution. Further, our approach naturally lends itself to automated protocol generation (compilation). This is evidenced by the implementation of (parts of) our framework in the TASTY SFE compiler (introduced at ACM CCS 2010). In sum, our work is positioned as a comprehensive guide to state-of-the-art SFE, with the additional goal of extracting, systematizing, and unifying the most relevant and promising general techniques from among the mass of SFE knowledge. We hope this guide will help developers of SFE libraries and privacy-preserving protocols in selecting the most efficient SFE components available today.
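    As a toy illustration of one building block surveyed here (garbled circuits), the sketch below garbles and evaluates a single AND gate. The hash-based encryption, 16-byte labels, and try-all-rows decryption with a zero tag are simplifications for exposition, not a secure or standard construction and not the paper's framework.

```python
# Toy sketch of a single garbled AND gate.  The garbler picks two random
# labels per wire, encrypts the output label for each of the four input
# combinations under the corresponding pair of input labels, and shuffles
# the rows; the evaluator, holding exactly one label per input wire, can
# open exactly one row.  Not a secure scheme; illustration only.
import os, hashlib, random

LABEL = 16          # label length in bytes
TAG = bytes(LABEL)  # all-zero tag marks a successful decryption

def H(ka: bytes, kb: bytes) -> bytes:
    return hashlib.sha256(ka + kb).digest()[: 2 * LABEL]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    a = {0: os.urandom(LABEL), 1: os.urandom(LABEL)}   # labels for input wire a
    b = {0: os.urandom(LABEL), 1: os.urandom(LABEL)}   # labels for input wire b
    c = {0: os.urandom(LABEL), 1: os.urandom(LABEL)}   # labels for output wire c
    table = [xor(H(a[x], b[y]), c[x & y] + TAG) for x in (0, 1) for y in (0, 1)]
    random.shuffle(table)
    return a, b, c, table

def evaluate(label_a: bytes, label_b: bytes, table):
    for row in table:
        plaintext = xor(H(label_a, label_b), row)
        if plaintext[LABEL:] == TAG:          # the one row this label pair opens
            return plaintext[:LABEL]
    raise ValueError("no row decrypted")

a, b, c, table = garble_and_gate()
out = evaluate(a[1], b[1], table)             # evaluator holds labels encoding 1 AND 1
print(out == c[1])                            # True: the recovered label encodes output 1
```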

    Three Modern Roles for Logic in AI

    We consider three modern roles for logic in artificial intelligence, which are based on the theory of tractable Boolean circuits: (1) logic as a basis for computation, (2) logic for learning from a combination of data and knowledge, and (3) logic for reasoning about the behavior of machine learning systems. Comment: To be published in PODS 2020.
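    As a hedged illustration of role (1), logic as a basis for computation, and not an example taken from the paper: on a tractable circuit, here a small decomposable and deterministic circuit in negation normal form, the probability that the circuit evaluates to true under independent variable probabilities is obtained by a single bottom-up pass.

```python
# Illustrative sketch (not from the paper): a bottom-up pass over a small
# decomposable, deterministic circuit computes the probability that the
# circuit is true under independent literal weights.
def wmc(node, weight):
    """weight maps a literal such as 'x' or '-x' to a value in [0, 1]."""
    kind = node[0]
    if kind == "lit":
        return weight[node[1]]
    if kind == "and":      # decomposable: children mention disjoint variables
        result = 1.0
        for child in node[1:]:
            result *= wmc(child, weight)
        return result
    if kind == "or":       # deterministic: children are mutually exclusive
        return sum(wmc(child, weight) for child in node[1:])
    raise ValueError(kind)

# Circuit for (x AND y) OR (NOT x AND z), decomposable and deterministic.
circuit = ("or",
           ("and", ("lit", "x"),  ("lit", "y")),
           ("and", ("lit", "-x"), ("lit", "z")))
weights = {"x": 0.5, "-x": 0.5, "y": 0.7, "-y": 0.3, "z": 0.2, "-z": 0.8}
print(wmc(circuit, weights))   # 0.45 = 0.5*0.7 + 0.5*0.2
```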

    Complexity Theory

    [no abstract available]