63 research outputs found

    Oracles Are Subtle But Not Malicious

    Full text link
    Theoretical computer scientists have been debating the role of oracles since the 1970s. This paper illustrates both that oracles can give us nontrivial insights about the barrier problems in circuit complexity, and that they need not prevent us from trying to solve those problems. First, we give an oracle relative to which PP has linear-sized circuits, by proving a new lower bound for perceptrons and low-degree threshold polynomials. This oracle settles a longstanding open question, and generalizes earlier results due to Beigel and to Buhrman, Fortnow, and Thierauf. More importantly, it implies the first nonrelativizing separation of "traditional" complexity classes, as opposed to interactive proof classes such as MIP and MA_EXP. For Vinodchandran showed, by a nonrelativizing argument, that PP does not have circuits of size n^k for any fixed k. We present an alternative proof of this fact, which shows that PP does not even have quantum circuits of size n^k with quantum advice. To our knowledge, this is the first nontrivial lower bound on quantum circuit size. Second, we study a beautiful algorithm of Bshouty et al. for learning Boolean circuits in ZPP^NP. We show that the NP queries in this algorithm cannot be parallelized by any relativizing technique, by giving an oracle relative to which ZPP^||NP and even BPP^||NP have linear-size circuits. On the other hand, we also show that the NP queries could be parallelized if P = NP. Thus, classes such as ZPP^||NP inhabit a "twilight zone," where we need to distinguish between relativizing and black-box techniques. Our results on this subject have implications for computational learning theory as well as for the circuit minimization problem. Comment: 20 pages, 1 figure.
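
    For reference, the headline statements of this abstract can be written formally as follows (the shorthand \mathsf{BQSIZE} for bounded-size quantum circuits with quantum advice is ours, not notation from the paper):

    \[ \exists A:\ \mathsf{PP}^A \subseteq \mathsf{SIZE}^A(O(n)), \qquad \forall k:\ \mathsf{PP} \not\subseteq \mathsf{SIZE}(n^k), \qquad \forall k:\ \mathsf{PP} \not\subseteq \mathsf{BQSIZE}(n^k)/\mathsf{qpoly}. \]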

    Pseudorandomness for Approximate Counting and Sampling

    Get PDF
    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
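
    The “boosting” step can be stated schematically as an implication between hardness hypotheses (the names \mathsf{NSIZE}, for nondeterministic circuit size, and \mathsf{SIZE}^{\mathsf{NP}_{\parallel}}, for circuits making non-adaptive NP oracle queries, are our shorthand):

    \[ \mathsf{EXP} \not\subseteq \mathsf{NSIZE}(\mathrm{poly}) \;\Longrightarrow\; \mathsf{EXP} \not\subseteq \mathsf{SIZE}^{\mathsf{NP}_{\parallel}}(\mathrm{poly}), \]

    where, per the abstract, hypotheses of the former kind are the ones used to conclude AM = NP.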

    The Future of Computation

    Full text link
    ``The purpose of life is to obtain knowledge, use it to live with as much satisfaction as possible, and pass it on with improvements and modifications to the next generation.'' This may sound philosophical, and the interpretation of words may be subjective, yet it is fairly clear that this is what all living organisms--from bacteria to human beings--do in their lifetime. Indeed, this can be adopted as the information-theoretic definition of life. Over billions of years, biological evolution has experimented with a wide range of physical systems for acquiring, processing and communicating information. We are now in a position to make the principles behind these systems mathematically precise, and then extend them as far as the laws of physics permit. Therein lies the future of computation, of ourselves, and of life. Comment: 7 pages, RevTeX. Invited lecture at the Workshop on Quantum Information, Computation and Communication (QICC-2005), IIT Kharagpur, India, February 2005.

    A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization

    Full text link
    In this paper we consider a general problem set-up for a wide class of convex and robust distributed optimization problems in peer-to-peer networks. In this set-up, convex constraint sets are distributed to the network processors, which have to compute the optimizer of a linear cost function subject to the constraints. We propose a novel fully distributed algorithm, named cutting-plane consensus, to solve the problem, based on an outer polyhedral approximation of the constraint sets. Processors running the algorithm compute and exchange linear approximations of their locally feasible sets. Independently of the number of processors in the network, each processor stores only a small number of linear constraints, making the algorithm scalable to large networks. The cutting-plane consensus algorithm is presented and analyzed for the general framework. Specifically, we prove that all processors running the algorithm agree on an optimizer of the global problem, and that the algorithm is tolerant to node and link failures as long as network connectivity is preserved. Then, the cutting-plane consensus algorithm is specialized to three different classes of distributed optimization problems, namely (i) inequality constrained problems, (ii) robust optimization problems, and (iii) almost separable optimization problems with separable objective functions and coupling constraints. For each of these problem classes we solve a concrete problem that can be expressed in that framework and present computational results. That is, we show how to solve: position estimation in wireless sensor networks, a distributed robust linear program, and a distributed microgrid control problem. Comment: submitted to IEEE Transactions on Automatic Control.
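
    To make the mechanics concrete, below is a minimal sketch of what one round at a single processor could look like, assuming a linear cost optimized over exchanged cutting planes. It is an illustration under our own simplifying assumptions (the function names, cut-selection rule, and box bounds are ours), not the authors' algorithm verbatim.

    import numpy as np
    from scipy.optimize import linprog

    def most_violated_cut(x, local_constraints, tol=1e-9):
        """Return the locally known constraint (a, b) with the largest violation a @ x - b > tol, or None."""
        worst, worst_viol = None, tol
        for a, b in local_constraints:
            viol = a @ x - b
            if viol > worst_viol:
                worst, worst_viol = (a, b), viol
        return worst

    def consensus_round(c, stored_cuts, neighbor_cuts, local_constraints):
        """One round at one processor: re-optimize over all known cuts, then refine locally.

        stored_cuts: the small set of linear cuts (a, b) this processor keeps.
        neighbor_cuts: cuts received from neighbors this round.
        Assumes the resulting LP is feasible; the box bounds keep it bounded.
        """
        cuts = stored_cuts + neighbor_cuts
        A = np.array([a for a, _ in cuts])
        b = np.array([bb for _, bb in cuts])
        # Minimize the common linear cost over the current outer approximation.
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1e6, 1e6)] * len(c))
        x = res.x
        # Refine the outer approximation with a cut from the processor's own
        # constraint set if the tentative optimizer violates it.
        cut = most_violated_cut(x, local_constraints)
        if cut is not None:
            cuts.append(cut)
        # The actual algorithm would now prune inactive cuts so that only a
        # small "basis" is stored and exchanged; that step is omitted here.
        return cuts, x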

    Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

    Get PDF
    A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards proving NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C is not in SIZE[n^k] for all k, we show that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D is not in SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^{NP} that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.-P^{NP}_{/n}). Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not NP. We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially-sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.
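
    Stated in LaTeX notation, the central equivalence and the improved collapse from the abstract read:

    \[ \bigl(\forall k:\ \mathsf{P}^{\mathsf{NP}} \not\subseteq \mathsf{SIZE}[n^k]\bigr) \iff \bigl(\mathsf{NP} \subseteq \mathsf{P}/\mathrm{poly} \Rightarrow \mathsf{PH} \subseteq \text{i.o.-}\mathsf{P}^{\mathsf{NP}}_{/n}\bigr), \]

    \[ \bigl(\forall k:\ \mathsf{NP} \not\subseteq \mathsf{SIZE}[n^k]\bigr) \Rightarrow \Bigl(\forall \mathcal{C} \in \{\oplus\mathsf{P}, \mathsf{PP}, \mathsf{PSPACE}, \mathsf{EXP}\}:\ \mathcal{C} \subseteq \mathsf{P}/\mathrm{poly} \Rightarrow \forall \varepsilon > 0:\ \mathcal{C} \subseteq \text{i.o.-}\mathsf{NP}_{/n^{\varepsilon}}\Bigr). \]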

    High-level characteristics of or- and independent and-parallelism in Prolog

    Get PDF
    Although studies of a number of parallel implementations of logic programming languages are now available, their results are difficult to interpret due to the multiplicity of factors involved, the effect of each of which is difficult to separate. In this paper we present the results of a high-level simulation study of or- and independent and-parallelism with a wide selection of Prolog programs, aiming to determine the intrinsic amount of parallelism independently of implementation factors and thus to facilitate this separation. We expect this study to be instrumental in better understanding and comparing results from actual implementations, as shown by some examples provided in the paper. In addition, the paper examines some of the issues and tradeoffs associated with the combination of and- and or-parallelism and proposes reasonable solutions based on the simulation data obtained.

    Attacking post-quantum cryptography

    Get PDF

    Bayesian Network Approximation from Local Structures

    Get PDF
    This work focuses on the problem of Bayesian network structure learning. Two main areas in this field are discussed here.

    The first area is theoretical. We consider aspects of the hardness of Bayesian network structure learning. In particular, we prove that the problem of finding a Bayesian network structure with a minimal number of edges encoding the joint probability distribution of a given dataset is NP-hard. This result offers a view of the NP-hardness of Bayesian network structure learning that differs significantly from the standard one. The most notable results in this area so far focus mainly on a specific formulation of the problem, where the aim is to find a Bayesian network structure maximizing some given probabilistic criterion. These criteria arise from quite advanced statistical considerations, and their interpretation may not be intuitive, especially for people unfamiliar with the Bayesian networks domain. In contrast, the criterion proposed here, for which NP-hardness is proved, requires no advanced knowledge and is easy to understand.

    The second area concerns concrete algorithms. We focus on one of the most interesting branches in the history of Bayesian network structure learning methods, one that has led to very significant solutions: the branch of local Bayesian network structure learning methods, whose main aim is first to gather information describing local properties of the constructed network and then to use this information appropriately to construct the whole network structure. The algorithm at the root of this branch is based on an important local characterization of Bayesian networks, the so-called Markov blanket. The Markov blanket of a given attribute consists of those other attributes which, in the probabilistic sense, correspond to the set of its causes that is maximal in strength and minimal in size. The aforementioned first algorithm in this branch rests on one important observation: under appropriate assumptions, the optimal Bayesian network structure can be determined by examining relations between attributes only within the Markov blankets. For datasets derived from suitably sparse distributions, where the Markov blanket of each attribute has size bounded by a common constant, this procedure yields a Bayesian network structure learning approach that scales well in time.

    The local-learning branch has evolved mainly in the direction of reducing the gathered local information to even smaller and more reliably learned patterns. This reduction grew out of parallel progress in the field of Markov blanket approximation.

    The main result of this dissertation is a Bayesian network structure learning procedure that belongs to the branch of local learning methods and in fact forks from its root. The fundamental idea is to aggregate the local knowledge learned over the Markov blankets not in the form of dependencies derived within these blankets, as in the root method, but in the form of local Bayesian networks. The user thereby gains considerable influence over the character of this local knowledge, by choosing a Bayesian network structure learning method suited to their needs for learning the local structures. The merging of local structures into a global one is justified theoretically and evaluated empirically, showing its ability to enhance even very advanced Bayesian network structure learning algorithms when they are applied locally within the proposed scheme.
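
    As a rough illustration of the aggregation idea, the sketch below learns a local structure over each attribute's Markov blanket with a user-supplied learner and then merges the local edge sets into one global edge set. The helper names (estimate_markov_blanket, local_learner) are hypothetical stand-ins for whatever concrete methods the user plugs in, and the majority-vote merge rule is our simplification, not the dissertation's theoretically justified procedure.

    def learn_global_structure(data, attributes, estimate_markov_blanket, local_learner):
        """Merge locally learned Bayesian network structures into a global edge set.

        data: mapping attribute -> observed column.
        estimate_markov_blanket(data, a): returns the set of attributes in a's blanket.
        local_learner(data, attrs): returns a set of directed edges (u, v) over attrs.
        """
        edge_votes = {}
        for a in attributes:
            blanket = estimate_markov_blanket(data, a)
            # Learn a local Bayesian network over the blanket plus the attribute itself.
            local_edges = local_learner(data, blanket | {a})
            for edge in local_edges:
                edge_votes[edge] = edge_votes.get(edge, 0) + 1
        # Keep an edge if it out-votes its reversal across the local models;
        # ties are dropped as unresolved.
        edges = set()
        for (u, v), votes in edge_votes.items():
            if votes > edge_votes.get((v, u), 0):
                edges.add((u, v))
        # A faithful merge must also enforce acyclicity of the resulting global
        # DAG; that repair step is omitted from this sketch.
        return edges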