
    Extremely Uniform Branching Programs

    We propose a new descriptive complexity notion of uniformity for branching programs solving problems defined on structured data. We observe that FO[=]-uniform (n-way) branching programs are unable to solve the tree evaluation problem studied by Cook, McKenzie, Wehr, Braverman and Santhanam [8] because such programs possess a variant of their thriftiness property. Similarly, FO[=]-uniform (n-way) branching programs are unable to solve the P-complete GEN problem because such programs possess the incremental property studied by Gál, Koucký and McKenzie [10].
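
    For concreteness, here is a minimal sketch of the tree evaluation problem referenced above; the representation and names are ours, not those of [8]. A complete binary tree holds a value from {0, ..., k-1} at each leaf and a k-by-k function table at each internal node, and the task is to compute the value at the root.

```python
# Minimal sketch of the tree evaluation problem of Cook et al. [8]
# (representation and names are ours): leaves hold values in
# {0, ..., k-1}, internal nodes hold k-by-k function tables, and the
# output is the value computed at the root.

import random

def tree_eval(node, k):
    """A leaf is an int in range(k); an internal node is a triple
    (table, left, right) with table a k-by-k list of ints."""
    if isinstance(node, int):
        return node
    table, left, right = node
    return table[tree_eval(left, k)][tree_eval(right, k)]

def random_instance(height, k):
    """A random instance of the given height (height 1 is a leaf)."""
    if height == 1:
        return random.randrange(k)
    table = [[random.randrange(k) for _ in range(k)] for _ in range(k)]
    return (table, random_instance(height - 1, k),
            random_instance(height - 1, k))

print(tree_eval(random_instance(4, 3), 3))  # a value in {0, 1, 2}
```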

    Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption

    An affine determinant program ADP: {0,1}^n → {0,1} is specified by a tuple (A, B_1, ..., B_n) of square matrices over F_q and a function Eval: F_q → {0,1}, and is evaluated on an input x ∈ {0,1}^n by computing Eval(det(A + sum_{i ∈ [n]} x_i B_i)). In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof of concept, we give a candidate ADP-based construction of indistinguishability obfuscation (iO) for all circuits, along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
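
    As an illustration of the evaluation rule above, here is a minimal sketch in Python. The choice of modulus and the Eval map (parity of the determinant) are hypothetical placeholders for illustration only, not the paper's construction.

```python
# Minimal sketch of evaluating an affine determinant program
# ADP = (A, B_1, ..., B_n) on x in {0,1}^n, as defined above.
# The modulus q and the Eval map below are illustrative placeholders.

q = 101  # a small prime (hypothetical choice)

def det_mod_q(M, q):
    """Determinant of a square matrix over F_q, by Gaussian elimination."""
    M = [[v % q for v in row] for row in M]
    dim = len(M)
    det = 1
    for col in range(dim):
        pivot = next((r for r in range(col, dim) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = (-det) % q  # a row swap flips the sign
        det = det * M[col][col] % q
        inv = pow(M[col][col], q - 2, q)  # inverse via Fermat's little theorem
        for r in range(col + 1, dim):
            factor = M[r][col] * inv % q
            for c in range(col, dim):
                M[r][c] = (M[r][c] - factor * M[col][c]) % q
    return det

def adp_evaluate(A, Bs, x, q, Eval=lambda d: d & 1):
    """Compute Eval(det(A + sum_i x_i * B_i)) over F_q."""
    dim = len(A)
    M = [[(A[r][c] + sum(xi * B[r][c] for xi, B in zip(x, Bs))) % q
          for c in range(dim)] for r in range(dim)]
    return Eval(det_mod_q(M, q))

# Example: a 2x2 ADP on a single input bit.
A  = [[1, 0], [0, 1]]
B1 = [[1, 1], [0, 1]]
print(adp_evaluate(A, [B1], [1], q))  # Eval(det([[2,1],[0,2]])) = Eval(4) = 0
```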

    Pseudorandom Generators for Width-3 Branching Programs

    We construct pseudorandom generators of seed length Õ(log(n)·log(1/ϵ)) that ϵ-fool ordered read-once branching programs (ROBPs) of width 3 and length n. For unordered ROBPs, we construct pseudorandom generators with seed length Õ(log(n)·poly(1/ϵ)). This is the first improvement for pseudorandom generators fooling width-3 ROBPs since the work of Nisan [Combinatorica, 1992]. Our constructions are based on the 'iterated milder restrictions' approach of Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson framework [FOCS, 1985]), combined with the INW generator [STOC, 1994] at the last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered case, we combine iterated milder restrictions with the generator of Chattopadhyay et al. [CCC, 2018]. Two conceptual ideas that play an important role in our analysis are: (1) a relabeling technique allowing us to analyze a relabeled version of the given branching program, which turns out to be much easier; (2) treating the number of colliding layers in a branching program as a progress measure and showing that it reduces significantly under pseudorandom restrictions. In addition, we achieve nearly optimal seed length Õ(log(n/ϵ)) for the classes of: (1) read-once polynomials on n variables, (2) locally-monotone ROBPs of length n and width 3 (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs of length n having a layer of width 2 in every consecutive polylog(n) layers. Comment: 51 pages
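
    To make the object being fooled concrete, here is a minimal sketch (our own representation, not the paper's) of an ordered width-3 ROBP and its exact acceptance probability under independent input bits. A generator ϵ-fools the program if the acceptance probability under the generator's (correlated, small-seed) output distribution is within ϵ of the value computed here for truly uniform bits.

```python
# Minimal sketch (our own representation) of an ordered read-once
# branching program of width 3: one layer per input bit, each layer a
# pair of transition maps on the states {0, 1, 2}, one per bit value.

def accept_prob(layers, bit_probs, accepting=(2,)):
    """Exact acceptance probability when the bits are independent,
    with bit i equal to 1 with probability bit_probs[i]. Under truly
    uniform bits, every entry of bit_probs is 1/2."""
    dist = [1.0, 0.0, 0.0]  # start in state 0
    for (t0, t1), p in zip(layers, bit_probs):
        new = [0.0, 0.0, 0.0]
        for s in range(3):
            new[t0[s]] += dist[s] * (1 - p)  # the bit is 0
            new[t1[s]] += dist[s] * p        # the bit is 1
        dist = new
    return sum(dist[s] for s in accepting)

# A toy 4-bit program, evaluated under uniform input bits; this is the
# quantity a pseudorandom generator must approximate within epsilon.
layers = [((0, 1, 2), (1, 2, 2))] * 4
print(accept_prob(layers, [0.5] * 4))
```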

    LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models

    We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables. The key strength of this language and its compilation scheme is the ability to automatically distinguish the parameters with respect to which the density function is discontinuous, while also providing runtime checks for boundary crossings. This enables the introduction of new inference engines that are able to exploit gradient information while remaining efficient for models that are not everywhere differentiable. We demonstrate this ability by incorporating a discontinuous Hamiltonian Monte Carlo (DHMC) inference engine that delivers automated and efficient inference for non-differentiable models. Our system is backed by a mathematical formalism ensuring that any model expressed in this language has a density with measure-zero discontinuities, which maintains the validity of the inference engine. Comment: Published in the proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS).
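
    A minimal sketch (ours, not the LF-PPL implementation) of the two ingredients named above: a piecewise-continuous log-density whose only discontinuity sits on a measure-zero set, and a runtime check for whether a proposed move crossed that boundary.

```python
# Minimal sketch (ours, not the LF-PPL system) of a piecewise density
# and the kind of runtime boundary check described above.

import math

def log_density(z):
    """Piecewise-continuous log-density that is discontinuous only at
    z == 0, a set of measure zero, as the paper's formalism requires."""
    if z < 0:
        return -0.5 * z * z   # unnormalized Gaussian branch
    return math.log(0.5) - z  # exponential branch

def crossed_boundary(z_old, z_new):
    """Runtime check: did a proposed move step across z == 0?
    A discontinuity-aware integrator such as DHMC uses this signal to
    handle the jump in the density rather than trusting the gradient."""
    return (z_old < 0) != (z_new < 0)

print(crossed_boundary(-0.3, 0.2))  # True: the move crossed the boundary
```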

    Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

    Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly-optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of nearly-optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
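
    A minimal sketch of the structural assumption, with a hypothetical cost function and hypothetical piece boundaries (assumed known here; the paper's algorithm learns promising parameters without knowing them): when performance is piecewise constant in the parameter, one representative per constant piece suffices.

```python
# Minimal sketch of the piecewise-constant structure described above.
# The cost function and piece boundaries are hypothetical, and the
# boundaries are assumed known for the sake of illustration.

def cost(p):
    """Hypothetical running time as a function of a tunable parameter."""
    if p < 0.2:
        return 50.0
    if p < 0.65:
        return 12.0  # a narrow pocket of nearly-optimal parameters
    return 30.0

boundaries = [0.0, 0.2, 0.65, 1.0]

# Because cost is constant on each piece, one representative parameter
# per piece captures every achievable behavior, whereas a coarse
# data-independent random grid could miss the middle pocket entirely.
representatives = [(lo + hi) / 2 for lo, hi in zip(boundaries, boundaries[1:])]
best = min(representatives, key=cost)
print(best, cost(best))  # 0.425 12.0
```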

    Polynomial tuning of multiparametric combinatorial samplers

    Boltzmann samplers and the recursive method are prominent algorithmic frameworks for the approximate-size and exact-size random generation of large combinatorial structures, such as maps, tilings, RNA sequences or various tree-like structures. In their multiparametric variants, these samplers make it possible to control the profile of expected values corresponding to multiple combinatorial parameters. One can control, for instance, the number of leaves, the profile of node degrees in trees, or the number of occurrences of certain subpatterns in strings. However, such flexible control requires an additional non-trivial tuning procedure. In this paper, we propose an efficient tuning algorithm, polynomial-time with respect to the number of tuned parameters, based on convex optimisation techniques. Finally, we illustrate the efficiency of our approach using several applications of rational, algebraic and Pólya structures, including polyomino tilings with prescribed tile frequencies, planar trees with a given node degree distribution, and weighted partitions. Comment: Extended abstract, accepted to ANALCO2018. 20 pages, 6 figures, colours. Implementation and examples are available at [1] https://github.com/maciej-bendkowski/boltzmann-brain [2] https://github.com/maciej-bendkowski/multiparametric-combinatorial-sampler
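
    For the single-parameter case, the tuning problem reduces to inverting a monotone expected-size map, which the following self-contained sketch illustrates on plane binary trees. This is our toy example: the paper's algorithm handles many parameters simultaneously via convex optimisation, whereas one binary search suffices here.

```python
# Minimal toy sketch of Boltzmann tuning for a single parameter.
# Plane binary trees counted by leaves have generating function
# B(x) = x + B(x)^2, so B(x) = (1 - sqrt(1 - 4x)) / 2 for 0 < x <= 1/4.

import math
import random

def B(x):
    return (1 - math.sqrt(1 - 4 * x)) / 2

def expected_size(x):
    """Expected number of leaves under the Boltzmann distribution:
    E[size] = x * B'(x) / B(x), with B'(x) = 1 / sqrt(1 - 4x)."""
    return x / (math.sqrt(1 - 4 * x) * B(x))

def tune(target, lo=1e-9, hi=0.25 - 1e-12):
    """Binary-search the control parameter x so that the expected
    size matches the target; expected_size is increasing in x."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

def sample(x):
    """Boltzmann sampler: returns the number of leaves of one random
    tree (a leaf with probability x / B(x), else two subtrees)."""
    if random.random() < x / B(x):
        return 1
    return sample(x) + sample(x)

x = tune(20.0)
print(expected_size(x))                          # ~20.0
print(sum(sample(x) for _ in range(200)) / 200)  # empirical mean, ~20
```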