13 research outputs found

    The complexity of joint computation

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 253-266).

    Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas:

    Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most 1 - Δ in computing the Boolean function f on input distribution ÎŒ, then for α > 0, the worst-case success probability of any αR₂(f)k-query randomized algorithm for f^k falls exponentially with k. The best previous statement of this type, due to Klauck, Ć palek, and de Wolf, required a query bound of O(bs(f)k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^k. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.

    Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection f1, ..., fk of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit's complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model. First, we study an information-theoretic lower-bound method due to Cherukhin, which gave the first improvement over the lower bounds provided by the well-known superconcentrator technique for constant depths. (The lower bounds are still barely superlinear, however.) Cherukhin's method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the "Strong Multiscale Entropy" (SME) property. It seemed plausible that this property could imply significantly better lower bounds via an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in constant depth, with wire complexity that essentially matches the Cherukhin-Jukna lower bound (to within a constant multiplicative factor, for depths d = 2, 3 and for even depths d ≄ 6). Next, we show limitations of two simpler lower-bound criteria given by Jukna: the "entropy method" for general operators, and the "pairwise-distance method" for linear operators. We show that neither method gives super-linear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities for an explicit operator. We also continue the study (initiated by Jukna) of the complexity of "representing" a linear operator by bounded-depth circuits, a weaker notion than computing the operator.

    New limits to classical and quantum instance compression: Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. As a representative problem, say we are given Boolean formulas ψ1, ..., ψt, each of length n â‰Ș t, and we want to determine whether at least one ψj is satisfiable. Can we efficiently reduce this "OR-SAT" question to an equivalent problem instance (of SAT or another problem) of size poly(n), independent of t? We call any such reduction a "strong compression" reduction for OR-SAT. This would amount to a major gain from compressing ψ1, ..., ψt jointly, since we know of no way to reliably compress an individual SAT instance. Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong compression for OR-SAT would also imply limits to instance compression schemes for a large number of other, natural problems; this is significant because instance compression is a central technique in the design of so-called fixed-parameter tractable algorithms. Bodlaender et al. also showed that the infeasibility of strong compression for the analogous "AND-SAT" problem would establish limits to instance compression for another family of problems. Fortnow and Santhanam (STOC '08) showed that deterministic (or 1-sided error randomized) strong compression for OR-SAT is not possible unless NP ⊆ coNP/poly; the case of AND-SAT remained mysterious. We give new and improved evidence against strong compression schemes for both OR-SAT and AND-SAT; our method applies to probabilistic compression schemes with 2-sided error. We also give versions of these results for an analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance. We give quantitatively similar evidence against strong compression for AND- and OR-SAT in this setting, albeit under less well-studied hypotheses about the relationship between NP and quantum complexity classes. To prove all of these results, we exploit the information bottleneck of an instance compression scheme, using a new method to "disguise" information being fed into a compressive mapping. By Andrew Donald Drucker. Ph.D.
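
    To make the compression task concrete, here is a rough formalization of a "strong compression" reduction for OR-SAT as described above. This is a sketch based on this abstract only; the thesis's exact definition (e.g., the treatment of randomized or quantum reductions and of the target language L) may differ.

```latex
% Sketch of a "strong compression" reduction R for OR-SAT (requires amsmath).
% Inputs: Boolean formulas \psi_1, \dots, \psi_t, each of length at most n.
\[
  R(\psi_1,\dots,\psi_t) \;=\; x,
  \qquad |x| \le \mathrm{poly}(n) \quad (\text{independent of } t),
\]
\[
  x \in L \;\iff\; \exists\, j \in \{1,\dots,t\}\ \text{such that}\ \psi_j \in \mathrm{SAT},
\]
% where R runs in time polynomial in its input length and L is the target
% problem (SAT or any other fixed language).
```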

    NP-hardness of circuit minimization for multi-output functions

    Get PDF
    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
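
    As a rough formalization of the minimization problem described above (a sketch based on this abstract; the paper's exact encoding and circuit-size measure may differ), the decision problem shown NP-hard can be written as:

```latex
% Multi-output minimum circuit size problem (sketch; requires amsmath).
% tt(f) denotes the full truth table of f, and s is the size bound.
\[
  \text{Multi-MCSP} \;=\;
  \bigl\{\, (\mathrm{tt}(f),\, s) \;:\;
    f\colon\{0,1\}^n \to \{0,1\}^m
    \ \text{is computed by some Boolean circuit of size at most } s \,\bigr\}.
\]
```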

    Secret Sharing Schemes Based on Error-Correcting Codes

    Get PDF
    In this thesis we present a new secret sharing scheme based on binary error-correcting codes, which can realize arbitrary (monotone or non-monotone) access structures. In this secret sharing scheme the secret is a codeword in a binary error-correcting code, and the shares are binary words of the same length. When a group of participants wants to reconstruct the secret, the participants compute the sum of their shares and apply Hamming decoding to that sum. The shares have the property that, when the group is authorized, the secret is the codeword closest to the sum of the shares. Otherwise, the sum differs from the secret strongly enough that Hamming decoding yields another codeword. The shares can be described by the solutions of a system of linear equations which is closely related to first-order Reed-Muller codes. We consider the case in which there are only two possible Hamming distances from the sum of the shares to the secret: a small distance k for authorized sets and a large distance g for unauthorized sets. For this case, we present a method for finding suitable shares for arbitrary access structures. In the resulting secret sharing scheme, large code lengths are needed and the security distance g is rather small. In order to find classes of access structures which have more efficient and secure realizations, we classify access structures so that all access structures in one class allow the same parameters g and k. Furthermore, we study several changes to the access structure and their impact on the possible realizations. This gives rise to special classes of access structures defined by veto sets and necessary sets, which are particularly suitable for our approach.
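
    The reconstruction step described above (sum the shares, then apply Hamming decoding) can be illustrated with a small sketch. The code below is purely illustrative: it uses the [7,4] Hamming code as a toy code and hand-picked shares, not the thesis's Reed-Muller-related construction or its method for generating shares.

```python
# Toy illustration of reconstruction: the secret is a codeword, the shares are
# binary words of the same length, and an authorized group recovers the secret
# by XOR-summing its shares and decoding the sum to the nearest codeword.
from itertools import product

# Generator matrix of the [7,4] Hamming code (example code only).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

CODEWORDS = [encode(msg) for msg in product([0, 1], repeat=4)]

def xor_sum(words):
    """Bitwise XOR (sum over GF(2)) of equal-length binary words."""
    out = [0] * len(words[0])
    for w in words:
        out = [a ^ b for a, b in zip(out, w)]
    return tuple(out)

def nearest_codeword(word):
    """Brute-force minimum-Hamming-distance ("Hamming") decoding."""
    return min(CODEWORDS, key=lambda c: sum(a != b for a, b in zip(c, word)))

def reconstruct(shares):
    """Authorized group: XOR the shares, then decode to the closest codeword."""
    return nearest_codeword(xor_sum(shares))

# Hypothetical example: three hand-picked shares whose XOR lies within the
# decoding radius of the secret codeword, so decoding recovers the secret.
secret = encode((1, 0, 1, 1))
shares = [(1, 1, 0, 0, 1, 0, 1), (0, 1, 1, 1, 0, 0, 0), (0, 0, 0, 0, 1, 1, 0)]
assert xor_sum(shares) != secret      # the sum itself is not the secret ...
print(reconstruct(shares) == secret)  # ... but it decodes to the secret: True
```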

    Secure Schemes for Semi-Trusted Environment

    Get PDF
    In recent years, two distributed system technologies have emerged: Peer-to-Peer (P2P) and cloud computing. In the former, the computers at the edge of networks share their resources, i.e., computing power, data, and network bandwidth, and obtain resources from other peers in the same community. Although this technology enables efficiency, scalability, and availability at a low cost of ownership and maintenance, peers are by definition "like each other" and are not wholly controlled by one another or by the same authority. In addition, resources and functionality in P2P systems depend on peer contribution, i.e., storing, computing, routing, etc. These specific aspects raise security concerns and attacks that many researchers try to address. Most proposed solutions rely on public-key certificates from an external Certificate Authority (CA) or a centralized Public Key Infrastructure (PKI). However, both a CA and a PKI contradict the nature of fully decentralized P2P systems, which are self-organizing and infrastructureless. To avoid this contradiction, this thesis concerns the provisioning of public-key certificates in P2P communities, which is a crucial foundation for securing P2P functionalities and applications. We create a framework, named the Self-Organizing and Self-Healing CA group (SOHCG), that can provide certificates without a centralized Trusted Third Party (TTP). In our framework, a CA group is initialized in a Content Addressable Network (CAN) by trusted bootstrap nodes and then grows to a mature state by itself. Based on our group management policies and predefined parameters, membership in the CA group is dynamic and uniformly distributed over the P2P community, and the size of the CA group is kept at a level that balances performance against acceptable security. A multicast group over the underlying CA group is constructed to reduce the communication and computation overhead of collaboration among CA members. To maintain the quality of the CA group, an honest majority of members is maintained by a Byzantine agreement algorithm, and all shares are refreshed gradually and continuously. Our CA framework has been designed to meet all of its design goals: being self-organizing, self-healing, scalable, resilient, and efficient. A security analysis shows that the framework enables key registration and certificate issuance with resistance to external attacks, i.e., node impersonation, man-in-the-middle (MITM), Sybil, and a specific form of DoS, as well as internal attacks, i.e., CA functionality interference and CA group subversion.

    Cloud computing is the most recent evolution of distributed systems that enable shared resources, as P2P systems do. Unlike P2P systems, cloud entities have asymmetric roles, as in client-server models, i.e., end-users collaborate with Cloud Service Providers (CSPs) through Web interfaces or Web portals. Cloud computing is a combination of technologies, e.g., SOA services, virtualization, grid computing, clustering, P2P overlay networks, management automation, and the Internet. With these technologies, cloud computing can deliver services with specific properties: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. However, these core technologies have their own intrinsic vulnerabilities, which expose cloud computing to specific attacks. Furthermore, since public clouds are a form of outsourcing, the security of users' resources must rely on CSPs' administration. This situation raises two crucial security concerns for users: being locked into a single CSP and losing control of resources. Providing interoperation between Application Service Providers (ASPs) and untrusted cloud storage is a countermeasure that can protect users from vendor lock-in and from losing control of their data. To meet this challenge, this thesis proposes a new authorization scheme, named OAuth and ABE based authorization (AAuth), that is built on the OAuth standard and leverages Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and ElGamal-like masks to construct ABE-based tokens. The ABE tokens facilitate a user-centric approach, end-to-end encryption, and end-to-end authorization in semi-trusted clouds. With these facilities, owners can retain control of their data resting in semi-trusted clouds and safely use services from unknown ASPs. To this end, our scheme divides the attribute universe into two disjoint sets: confined attributes, defined by owners to limit the lifetime and scope of tokens, and descriptive attributes, defined by an authority (or authorities) to certify the characteristics of ASPs. A security analysis shows that AAuth maintains the same security level as the original CP-ABE scheme and protects users from exposing their credentials to ASPs, as OAuth does. Moreover, AAuth can resist both external and internal attacks, including those from untrusted cloud storage. Since most cryptographic functions are delegated from owners to CSPs, AAuth gains computing power from the clouds. In our extensive simulations, AAuth's additional overhead was balanced by stronger security than OAuth provides. Furthermore, our scheme works seamlessly with storage providers by retaining the providers' APIs in their usual form.
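
    As a plain-language illustration of the attribute split described above, the sketch below models a token whose policy combines owner-defined "confined" attributes (scope and lifetime) with authority-certified "descriptive" attributes of the ASP. All names and fields are hypothetical, and the plaintext check is a simplification: in AAuth the policy is enforced cryptographically through CP-ABE, not by explicit comparison.

```python
# Simplified, hypothetical model of an ABE-based token's two attribute sets.
from dataclasses import dataclass
import time

@dataclass
class ABEToken:
    confined: dict          # owner-defined, e.g. {"scope": "...", "expires": ...}
    descriptive: frozenset  # authority-certified ASP attributes, e.g. {"asp:audited"}

def satisfies(token: ABEToken, asp_attributes: set, requested_scope: str) -> bool:
    """Token scope and lifetime come from the owner's confined attributes; the
    ASP must hold every descriptive attribute named in the token. (In the real
    scheme this is enforced by CP-ABE decryption, not plaintext logic.)"""
    if time.time() > token.confined["expires"]:
        return False
    if requested_scope != token.confined["scope"]:
        return False
    return token.descriptive <= asp_attributes

# Hypothetical usage
token = ABEToken(confined={"scope": "read:photos", "expires": time.time() + 3600},
                 descriptive=frozenset({"asp:audited"}))
print(satisfies(token, {"asp:audited", "region:eu"}, "read:photos"))  # True
```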

    Classical Computation in the Quantum World

    Full text link
    Quantum computation is by far the most powerful computational model allowed by the laws of physics. By carefully manipulating microscopic systems governed by quantum mechanics, one can efficiently solve computational problems that may be classically intractable; on the other hand, such speed-ups are rarely possible without the help of classical computation, since most quantum algorithms rely heavily on subroutines that are purely classical. A better understanding of the relationship between classical and quantum computation is indispensable, particularly in an era where the first quantum device exceeding classical computational power is within reach.

    In the first part of the thesis, we study some differences between classical and quantum computation. We first show that quantum cryptographic hashing is maximally resilient against classical leakage, a property beyond reach for any classical hash function. Next, we consider the limitations of strong (amplitude-wise) simulation of quantum computation. We prove an unconditional and explicit complexity lower bound for a category of simulations called monotone strong simulation, and further prove conditional complexity lower bounds for general strong simulation techniques. Both results indicate that strong simulation is fundamentally unscalable.

    In the second part of the thesis, we propose classical algorithms that facilitate quantum computing. We propose a new classical algorithm for the synthesis of a quantum algorithm paradigm called quantum signal processing. Empirically, our algorithm demonstrates numerical stability and a speedup of more than an order of magnitude compared to state-of-the-art algorithms. Finally, we propose a randomized algorithm for transversally switching between arbitrary stabilizer quantum error-correcting codes. It preserves the code distance and thus might prove useful for designing fault-tolerant code-switching schemes.

    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149943/1/cupjinh_1.pd

    Les preuves de protocoles cryptographiques revisitées

    Get PDF
    With the rise of the Internet, the use of cryptographic protocols has become ubiquitous. Given the criticality and complexity of these protocols, there is a strong need for formal verification. In order to obtain formal proofs of cryptographic protocols, two main attacker models exist: the symbolic model and the computational model. The symbolic model defines the attacker's capabilities as a fixed set of rules, while the computational model describes only the attacker's limitations, by stating that it cannot solve certain hard problems. While the former is quite abstract and convenient for automating proofs, the latter offers much stronger guarantees.

    There is a gap between the guarantees offered by these two models, due to the fact that the symbolic model defines what the adversary may do while the computational model describes what it may not do. In 2012, Bana and Comon devised a new symbolic model in which the attacker's limitations are axiomatised. In addition, provided that the computational semantics of the axioms follows from the cryptographic hypotheses, proving security in this symbolic model yields security in the computational model.

    The possibility of automating proofs in this model (and of finding axioms general enough to prove a large class of protocols) was left open in the original paper. In this thesis we provide an efficient decision procedure for a general class of axioms. In addition, we propose a tool (SCARY) implementing this decision procedure. Experimental results show that the axioms we designed for modelling the security of encryption are general enough to prove a large class of protocols.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Automated Deduction – CADE 28

    Get PDF
    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Hazard-free clock synchronization

    Get PDF
    The growing complexity of microprocessors makes it infeasible to distribute a single clock source over the whole processor with a small clock skew. Hence, chips are split into multiple clock regions, each covered by a single clock source. This poses a problem for communication between these clock regions. Clock synchronization algorithms promise an advantage over state-of-the-art solutions such as GALS systems: when clock regions are synchronized, communication latency improves significantly over handshake-based solutions. We focus on the implementation of clock synchronization algorithms. A major obstacle when implementing circuits at clock domain crossings is the occurrence of hazardous signals. We can formally define hazards by extending Boolean logic with a third value u. In this thesis, we describe a theory for designing and analyzing hazard-free circuits. We develop strategies for hazard-free encoding and for the construction of hazard-free circuits from finite state machines. Furthermore, we discuss clock synchronization algorithms and a possible combination of them. Finally, we present two implementations of the GCS algorithm by Lenzen, Locher, and Wattenhofer (JACM 2010). We prove by rigorous analysis that these systems implement the algorithm, and we use the theory described above to prove that our clock synchronization circuits are hazard-free (in the sense that they compute the most precise output possible). Simulation of our GCS system shows that it achieves a skew between neighboring clock regions that is smaller than a few inverter delays.
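
    The three-valued view of hazards mentioned above can be illustrated with a short sketch. This is a generic ternary ("u"-extended) evaluation in the spirit of that definition, not the thesis's exact formalism or its GCS circuits; the multiplexer example is a standard textbook case.

```python
# Ternary evaluation: Boolean logic extended with a third value U ("unstable/
# undefined"). A circuit is hazard-free on an input if propagating U for the
# unstable bits still yields a definite 0 or 1 at the output.
U = "u"

def t_not(a):
    return U if a == U else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:
        return 0
    return 1 if (a == 1 and b == 1) else U

def t_or(a, b):
    if a == 1 or b == 1:
        return 1
    return 0 if (a == 0 and b == 0) else U

def naive_mux(s, a, b):
    # mux(s, a, b) = (s AND a) OR (NOT s AND b)
    return t_or(t_and(s, a), t_and(t_not(s), b))

# Classic example: while the select line s toggles (modelled as U) and both
# data inputs are 1, ternary evaluation returns U, exposing a hazard.
print(naive_mux(U, 1, 1))  # 'u' -> hazardous for this input
print(naive_mux(1, 1, 0))  # 1   -> definite output on a stable input
```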