
    Algebraic and Combinatorial Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Another surprising connection is that the algebraic techniques invented to show lower bounds now prove useful for developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-paths algorithms. This emphasizes once again the central role of algebra in computer science. The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar can play an important role in educating a diverse community about the latest techniques, spurring further progress.
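
    The abstract mentions polynomial identity testing and its connection to the power of randomness. As a small illustration only (not taken from the seminar report), the sketch below shows the standard randomized identity test based on the Schwartz-Zippel lemma; the two example polynomials are hypothetical.

    ```python
    import random

    # Illustrative sketch: randomized polynomial identity testing via the
    # Schwartz-Zippel lemma. Two polynomials given only as black boxes are
    # compared at random points; if they differ, a single trial errs with
    # probability at most deg / field_size.

    def identity_test(p, q, num_vars, field_size=2**61 - 1, trials=20):
        """Return True if p and q appear to be the same polynomial."""
        for _ in range(trials):
            point = [random.randrange(field_size) for _ in range(num_vars)]
            if p(*point) % field_size != q(*point) % field_size:
                return False  # a witness point certifies p != q
        return True  # equal with high probability

    # Hypothetical example: (x + y)^2 equals x^2 + 2xy + y^2 but not x^2 + y^2.
    same = identity_test(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + 2 * x * y + y * y, num_vars=2)
    diff = identity_test(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + y * y, num_vars=2)
    print(same, diff)  # expected: True False
    ```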

    An AGI with Time-Inconsistent Preferences

    This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting. The trap is the implicit, and false, assumption that a rational AGI would have time-consistent preferences. An agent with time-inconsistent preferences knows that its future self will disagree with its current self concerning intertemporal decision making. Such an agent cannot automatically trust its future self to carry out plans that its current self considers optimal.
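
    To make time inconsistency concrete, here is a minimal numerical sketch (an illustration, not taken from the paper) using quasi-hyperbolic (beta-delta) discounting; all reward values, dates, and discount parameters are hypothetical.

    ```python
    # Minimal sketch: quasi-hyperbolic (beta-delta) discounting produces a
    # preference reversal, the hallmark of time-inconsistent preferences.
    # All numbers below are hypothetical.

    def present_value(reward, delay, beta=0.5, delta=0.95):
        """Discounted value of a reward received `delay` periods from now."""
        return reward if delay == 0 else beta * (delta ** delay) * reward

    small_soon = (10.0, 4)   # 10 units at t = 4
    large_late = (12.0, 5)   # 12 units at t = 5

    # Evaluated today (t = 0), both rewards lie in the future, so the agent
    # prefers the larger, later reward and plans to wait.
    print(present_value(*large_late) > present_value(*small_soon))   # True

    # Re-evaluated at t = 4, the small reward is immediate and escapes the beta
    # factor, so the agent's future self reverses the earlier plan.
    print(present_value(10.0, 0) > present_value(12.0, 1))           # True
    ```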

    Publicly Detectable Watermarking for Language Models

    We construct the first provable watermarking scheme for language models with public detectability or verifiability: we use a private key for watermarking and a public key for watermark detection. Our protocol is the first watermarking scheme that does not embed a statistical signal in generated text. Rather, we directly embed a publicly verifiable cryptographic signature using a form of rejection sampling. We show that our construction meets strong formal security guarantees and preserves many desirable properties found in schemes in the private-key watermarking setting. In particular, our watermarking scheme retains distortion-freeness and model agnosticity. We implement our scheme and make empirical measurements over open models in the 7B parameter range. Our experiments suggest that our watermarking scheme meets our formal claims while preserving text quality.
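
    The following toy sketch illustrates the general rejection-sampling idea described above; it is deliberately simplified and is not the authors' protocol. An HMAC stands in for a real digital signature (the actual construction uses a public-key signature so detection needs only the public key), and the vocabulary and "model" are placeholders.

    ```python
    import hashlib, hmac, random

    # Toy sketch of embedding signature bits by rejection sampling, NOT the
    # paper's construction: each generated token is resampled until a hash bit
    # of the token matches the next bit of a signature over the prompt.

    SECRET_KEY = b"hypothetical-signing-key"
    VOCAB = [f"tok{i}" for i in range(1000)]            # toy vocabulary

    def sign_bits(message: bytes, nbits: int = 32) -> list[int]:
        tag = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
        return [(tag[i // 8] >> (i % 8)) & 1 for i in range(nbits)]

    def token_bit(token: str) -> int:
        return hashlib.sha256(token.encode()).digest()[0] & 1

    def generate_watermarked(prompt: str) -> list[str]:
        out = []
        for b in sign_bits(prompt.encode()):
            while True:                                  # rejection sampling
                tok = random.choice(VOCAB)               # stand-in for an LM sample
                if token_bit(tok) == b:
                    out.append(tok)
                    break
        return out

    def detect(prompt: str, tokens: list[str]) -> bool:
        bits = sign_bits(prompt.encode())
        return [token_bit(t) for t in tokens[:len(bits)]] == bits

    text = generate_watermarked("a hypothetical prompt")
    print(detect("a hypothetical prompt", text))         # True
    ```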

    The Journey from NP to TFNP Hardness

    The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization, and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses, and at the same time to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g., program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that TFNP hardness cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this assumption is implied by the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
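
    As a small illustration of the kind of object TFNP contains (not taken from the paper), the sketch below brute-forces a pigeonhole-style collision problem: every instance is guaranteed to have a solution, which is exactly the totality that distinguishes TFNP from plain NP search. The instance and function names are hypothetical.

    ```python
    # Illustrative TFNP-style total search problem (PIGEON flavor): given a
    # function f mapping {0,...,n} into {0,...,n-1} (a circuit in the formal
    # definition), find x != y with f(x) = f(y). The pigeonhole principle
    # guarantees a solution exists for every instance.

    def find_collision(f, n):
        """Brute-force search; efficient algorithms are not known in general."""
        seen = {}
        for x in range(n + 1):          # n + 1 pigeons, n holes
            y = f(x)
            if y in seen:
                return seen[y], x       # a collision must exist
            seen[y] = x
        raise AssertionError("impossible: pigeonhole principle violated")

    # Hypothetical instance: a small compressing function.
    print(find_collision(lambda x: (7 * x + 3) % 10, n=10))
    ```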

    Proof-of-Stake for SpartanGold

    Consensus protocols are critical for any blockchain technology, and Proof-of-Stake (PoS) protocols have gained popularity due to their advantages over Proof-of-Work (PoW) protocols in terms of scalability and efficiency. However, existing PoS mechanisms, such as delegated and bonded PoS, suffer from security and usability issues. Pure PoS (PPoS) protocols provide stronger decentralization and offer a potential solution to these problems. Algorand, a well-known cryptocurrency, employs a PPoS protocol that utilizes a new Byzantine Agreement (BA) mechanism for consensus and Verifiable Random Functions (VRFs) to securely scale the protocol to accommodate many participants, making it possible to handle a growing number of clients with ease. In this research, we explore, implement, and document all the essential steps of the algorithm that lead to publishing a block in any given round, and we evaluate the performance and stability of Algorand using various numbers of users, their stakes, and network settings. To simulate the protocol, we extend the SpartanGold blockchain framework, which currently uses a PoW protocol, and convert it into a PoS model. Our results show that the PPoS protocol developed by Algorand is highly scalable, achieving consensus quickly and efficiently even in the presence of malicious users or network partitions, and offers higher security and Byzantine fault tolerance compared to traditional PoW and other PoS-based protocols.
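
    The sketch below gives a highly simplified picture of the stake-weighted sortition idea behind Algorand's PPoS; it is not the SpartanGold code. A SHA-256 hash stands in for a real VRF (whose proof would let others verify the draw without the secret key), and all users, stakes, and parameters are hypothetical.

    ```python
    import hashlib

    # Simplified stake-weighted sortition: a pseudo-random draw derived from a
    # per-user secret, the round seed, and the round number decides committee
    # membership with probability proportional to the user's share of stake.
    # Real Algorand draws a binomial number of seats; this is a single draw.

    def sortition_draw(secret: str, seed: str, round_no: int) -> float:
        """Map a user's (pseudo-)VRF output to a number in [0, 1)."""
        digest = hashlib.sha256(f"{secret}|{seed}|{round_no}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def is_selected(secret: str, stake: float, total_stake: float,
                    seed: str, round_no: int, expected_committee: float = 1.0) -> bool:
        p = min(expected_committee * stake / total_stake, 1.0)
        return sortition_draw(secret, seed, round_no) < p

    # Hypothetical users with different stakes competing for round 7.
    users = {"alice": 500.0, "bob": 100.0, "carol": 50.0}
    total = sum(users.values())
    for name, stake in users.items():
        print(name, is_selected(f"{name}-secret-key", stake, total, "block-6-seed", 7))
    ```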

    Bet-or-Pass: Adversarially Robust Bloom Filters

    A Bloom filter is a data structure that maintains a succinct and probabilistic representation of a set S ⊆ U of elements from a universe U. It supports approximate membership queries. The price of the succinctness is allowing some error, namely false positives: for any x ∉ S, it might answer 'Yes', but only with a small (non-negligible) probability. When dealing with such data structures in adversarial settings, we need to define the correctness guarantee and formalize the requirement that bad events happen infrequently and that false positives are appropriately distributed. Recently, several papers investigated this topic, suggesting different robustness definitions. In this work we unify this line of research and propose several robustness notions for Bloom filters that allow the adaptivity of queries. The goal is that a robust Bloom filter should behave like a random biased coin even against an adaptive adversary. The robustness definitions are expressed by the type of test that the Bloom filter should withstand. We explore the relationships between these notions and highlight the notion of Bet-or-Pass as capturing the desired properties of such a data structure.
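
    For readers unfamiliar with the basic object, here is a minimal, non-adversarial Bloom filter sketch; the parameters are hypothetical and none of the robustness machinery from the paper (adaptive queries, the Bet-or-Pass test) appears here.

    ```python
    import hashlib

    # Minimal Bloom filter: membership queries may return false positives but
    # never false negatives, which is the trade-off the abstract describes.

    class BloomFilter:
        def __init__(self, m: int = 1024, k: int = 4):
            self.bits = [False] * m     # m-bit array
            self.m, self.k = m, k       # k independent hash positions per item

        def _positions(self, item: str):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos] = True

        def query(self, item: str) -> bool:
            # True means "possibly in S" (false positives allowed);
            # False means "definitely not in S".
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("apple")
    print(bf.query("apple"), bf.query("pear"))   # True, almost surely False
    ```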

    Encapsulated Search Index: Public-Key, Sub-linear, Distributed, and Delegatable

    We build the first sub-linear (in fact, potentially constant-time) public-key searchable encryption system:
    - the server can publish a public key PK;
    - anybody can build an encrypted index for a document D under PK;
    - a client holding the index can obtain a token z_w from the server to check if a keyword w belongs to D;
    - search using z_w is almost as fast (e.g., sub-linear) as non-private search;
    - the server granting the token does not learn anything about the document D, beyond the keyword w;
    - yet, the token z_w is specific to the pair (D, w): the client does not learn whether other keywords w′ ≠ w belong to D, or whether w belongs to other, freshly indexed documents D′;
    - the server cannot fool the client by giving a wrong token z_w.
    We call such a primitive an Encapsulated Search Index (ESI). Our ESI scheme can be made (t, n)-distributed among n servers in the best possible way: non-interactive, verifiable, and resilient to any coalition of up to (t − 1) malicious servers. We also introduce the notion of a delegatable ESI and show how to extend our construction to this setting. Our solution (including public indexing, sub-linear search, delegation, and distributed token generation) is deployed as a commercial application by Atakama.
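
    To make the workflow in the list above concrete, here is an interface-level sketch only. The hashing below is an insecure placeholder, not the deployed construction: here anyone could compute tokens, whereas the real scheme requires the server's secret key for token generation while keeping indexing public and hiding D from the server. All names are hypothetical.

    ```python
    import hashlib

    # Interface-level sketch of the ESI workflow: build an index under a public
    # key, obtain a per-(D, w) token, and search with a constant-time lookup.
    # The crypto is a placeholder and provides none of the ESI security goals.

    def index_entry(pk: str, doc_id: str, keyword: str) -> str:
        return hashlib.sha256(f"{pk}|{doc_id}|{keyword}".encode()).hexdigest()

    def build_index(pk: str, doc_id: str, keywords: list[str]) -> set[str]:
        # Anybody can index a document D under the server's public key PK.
        return {index_entry(pk, doc_id, w) for w in keywords}

    def issue_token(pk: str, doc_id: str, keyword: str) -> str:
        # In the real scheme this step needs the server's secret key and yields
        # a token specific to the pair (D, w); the placeholder just recomputes
        # the same hash.
        return index_entry(pk, doc_id, keyword)

    # Hypothetical usage: does keyword "tax" occur in document "D1"?
    PK = "hypothetical-public-key"
    index = build_index(PK, "D1", ["tax", "invoice"])
    token = issue_token(PK, "D1", "tax")
    print(token in index)   # True: search is an O(1) set lookup given the token
    ```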