    Stretching demi-bits and nondeterministic-secure pseudorandomness

    We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich's original work [25], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following:

    Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [25]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [23]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that every demi-bit b : {0,1}^n → {0,1}^{n+1} can be stretched into sublinearly many demi-bits b′ : {0,1}^n → {0,1}^{n+n^c}, for every constant 0 < c < 1.

    Average-case hardness: Using work by Santhanam [26], we apply our results to obtain new average-case Kolmogorov complexity results: we show that K^poly[n−O(1)] is zero-error average-case hard against NP/poly machines iff K^poly[n−o(n)] is, where for a function s : N → N, K^poly[s(n)] denotes the language of all strings x ∈ {0,1}^n for which there is a (fixed) polynomial-time Turing machine of description length at most s(n) that outputs x.

    Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [31] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability.

    Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [8, 12], and it provides a first alternative characterisation of super-bits. We also prove that a certain class of functions, which may have hard-cores, cannot possess any super-core.
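
    A minimal formalisation of two of the statements above, transcribed into LaTeX as a reading aid (the symbols follow the abstract; the exact quantifier placement is our reading, so consult the paper for the precise definitions):

        % Demi-bit stretch, for every constant 0 < c < 1:
        b \colon \{0,1\}^n \to \{0,1\}^{n+1} \text{ a demi-bit}
          \;\Longrightarrow\;
          \exists\, b' \colon \{0,1\}^n \to \{0,1\}^{n+n^c} \text{ a demi-bit}.

        % The class used in the average-case hardness result:
        K^{\mathrm{poly}}[s(n)] \;=\; \{\, x \in \{0,1\}^n \mid
          \exists\ \text{poly-time TM } M,\ |M| \le s(n),\ M \text{ outputs } x \,\}.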

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

    Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. Precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels, we can solve - Independent Set in time 2^{O(dk)} ⋅ n^{O(1)} using O(dk² log n) space; - Max Cut in time n^{O(dk)} using O(dk log n) space; and - Dominating Set in time 2^{O(dk)} ⋅ n^{O(1)} using n^{O(1)} space via a randomized algorithm. We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.
    Comment: Conference version to appear at the European Symposium on Algorithms (ESA 2023).
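
    The theme that recursing along a bounded-depth decomposition keeps the working space small can be illustrated in miniature. The Python sketch below is our toy example, not the paper's algorithm (which handles tree-models with k labels): maximum independent set on a rooted tree computed by depth-first recursion, whose only working memory is the recursion stack, i.e. O(depth) words beyond the input.

        # Toy illustration (not the paper's algorithm): maximum independent set
        # on a rooted tree. The DFS stack is the only working memory, so space
        # is O(depth) machine words beyond the input -- the bounded depth of
        # the decomposition is what keeps the space small.
        def mis_tree(children, root):
            """Size of a maximum independent set of the tree rooted at `root`;
            `children[v]` lists the children of vertex v."""
            def dfs(v):
                without_v, with_v = 0, 1  # best sizes excluding / including v
                for c in children[v]:
                    c_without, c_with = dfs(c)
                    without_v += max(c_without, c_with)
                    with_v += c_without  # v is taken, so children are excluded
                return without_v, with_v
            return max(dfs(root))

        # Example: the path 0-1-2-3 has maximum independent sets of size 2.
        children = {0: [1], 1: [2], 2: [3], 3: []}
        assert mis_tree(children, 0) == 2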

    Sociotechnical Imaginaries, the Future and the Third Offset Strategy


    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock the potential of the Second Quantum Revolution, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change is also challenging science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to the challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to highlight the cultural and educational value of the Second Quantum Revolution. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials. The second part consists of an exploration of the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students, investigating whether and to what extent external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.

    Constant-Delay Enumeration for SLP-Compressed Documents


    Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

    Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels, - Independent Set can be solved in time 2^{O(dk)} ⋅ n^{O(1)} using O(dk² log n) space; - Max Cut can be solved in time n^{O(dk)} using O(dk log n) space; and - Dominating Set can be solved in time 2^{O(dk)} ⋅ n^{O(1)} using n^{O(1)} space via a randomized algorithm. We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.

    Evaluating Architectural Safeguards for Uncertain AI Black-Box Components

    Although tremendous progress has been made in Artificial Intelligence (AI), this progress entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to modelling architectural safeguards for AI components and analysing their effect on the overall system reliability.

    Strong Invariants Are Hard: On the Hardness of Strongest Polynomial Invariants for (Probabilistic) Programs

    We show that computing the strongest polynomial invariant for single-path loops with polynomial assignments is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. While the strongest polynomial invariants are computable for affine loops, for polynomial loops the problem has remained wide open. As an intermediate result of independent interest, we prove that reachability for discrete polynomial dynamical systems is Skolem-hard as well. Furthermore, we generalize the notion of invariant ideals and introduce moment invariant ideals for probabilistic programs. With this tool, we further show that the strongest polynomial moment invariant is (i) uncomputable for probabilistic loops with branching statements, and (ii) Skolem-hard to compute for polynomial probabilistic loops without branching statements. Finally, we identify a class of probabilistic loops for which the strongest polynomial moment invariant is computable, and we provide an algorithm for it.
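
    For orientation, a textbook-style illustration of the objects involved (our example, not taken from the paper):

        % A single-path loop with polynomial (here affine) assignments:
        %     while * do  y := y + 2x + 1;  x := x + 1  od,   from x = y = 0.
        % Every reachable state satisfies y = x^2, so y - x^2 lies in the
        % invariant ideal; the strongest polynomial invariant is the ideal of
        % ALL polynomials vanishing on the reachable states.
        %
        % The Skolem problem, to which the paper reduces: given an integer
        % linear recurrence
        %     u_{n+k} = a_{k-1} u_{n+k-1} + \cdots + a_0 u_n
        % with given integer initial values u_0, ..., u_{k-1}, decide whether
        % u_n = 0 for some n >= 0. Its decidability remains open.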