1,474 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Enumerating Regular Languages with Bounded Delay


    Folding interpretations

    We study the polyregular string-to-string functions, which are certain functions of polynomial output size that can be described using automata and logic. We describe a system of combinators that generates exactly these functions. Unlike previous systems, the present system includes an iteration mechanism, namely fold. Although unrestricted fold can define all primitive recursive functions, we identify a type system (inspired by linear logic) that restricts fold so that it defines exactly the polyregular functions. We also present related systems, for quantifier-free functions as well as for linear regular functions on both strings and trees. Comment: Author's version of a LICS 23 paper.
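
    A minimal sketch (in Haskell, not the paper's combinator system, and with none of its linear-logic typing restrictions): an ordinary fold already defines string-to-string functions of polynomial output size, such as the quadratic "squaring" function below, and without restrictions it can just as well define much faster-growing primitive recursive functions.

        -- Hypothetical illustration: a fold-defined string-to-string function
        -- whose output has length |w|^2, i.e. quadratic growth.
        square :: String -> String
        square w = foldr (\_ acc -> w ++ acc) [] w

        main :: IO ()
        main = putStrLn (square "abc")  -- prints "abcabcabc", length 9 = 3^2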

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and on its challenges, along with key priorities for the next decade.

    Bounded Relativization

    Relativization is one of the most fundamental concepts in complexity theory, which explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class C, we say that a statement is C-relativizing if the statement holds relative to every oracle A ∈ C. It is easy to see that every result that relativizes also C-relativizes for every complexity class C. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.

    First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊄ SIZE[2^n/n]. We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021).

    Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
    - Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
    - Oliveira and Santhanam (STOC 2017) recently proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
    - Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
    In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.

    Multirole Logic and Multiparty Channels

    We identify multirole logic as a new form of logic in which conjunction/disjunction is interpreted as an ultrafilter on some underlying set of roles and the notion of negation is generalized to endomorphisms on this set. We formulate both multirole logic (MRL) and linear multirole logic (LMRL) as natural generalizations of classical logic (CL) and classical linear logic (CLL), respectively. Among various meta-properties established for MRL and LMRL, we obtain one named multiparty cut-elimination stating that every cut involving one or more sequents (as a generalization of a binary cut involving exactly two sequents) can be eliminated, thus extending the celebrated result of cut-elimination by Gentzen. As a side note, we also give an ultrafilter-based interpretation for intuitionism, formulating MRLJ as a natural generalization of intuitionistic logic (IL). An immediate application of LMRL can be found in a formulation of session types for channels that support multiparty communication in distributed programming. We present a multi-threaded lambda-calculus (MTLC) where threads communicate on linearly typed multiparty channels that are directly rooted in LMRL, establishing for MTLC both type preservation and global progress. The primary contribution of the paper consists of both identifying multirole logic as a new form of logic and establishing a theoretical foundation for it, and the secondary contribution lies in applying multirole logic to the practical domain of distributed programming. Comment: arXiv admin note: text overlap with arXiv:1604.0302
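
    A loose illustration (plain Haskell threads, not the paper's MTLC, and without the linear typing discipline rooted in LMRL that it develops): several participants exchanging messages over one shared channel is the kind of multiparty communication that the paper's session types are meant to govern statically; nothing in this untyped sketch rules out protocol violations.

        import Control.Concurrent (forkIO)
        import Control.Concurrent.Chan (newChan, readChan, writeChan)
        import Control.Monad (forM_, replicateM_)

        main :: IO ()
        main = do
          ch <- newChan
          -- three participant threads each send one message on the shared channel
          forM_ ["alice", "bob", "carol"] $ \name ->
            forkIO (writeChan ch (name ++ ": done"))
          -- a coordinator collects one message per participant
          replicateM_ 3 (readChan ch >>= putStrLn)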

    Constant-Delay Enumeration for SLP-Compressed Documents


    Towards compact bandwidth and efficient privacy-preserving computation

    In traditional cryptographic applications, cryptographic mechanisms are employed to ensure the security and integrity of communication or storage. In these scenarios, the primary threat is usually an external adversary trying to intercept or tamper with the communication between two parties. In the context of privacy-preserving computation (or secure computation), on the other hand, cryptographic techniques are developed with a different goal in mind: to protect the privacy of the participants in a computation from each other. Specifically, privacy-preserving computation allows multiple parties to jointly compute a function without revealing their inputs, and it has numerous applications in various fields, including finance, healthcare, and data analysis. It allows for collaboration and data sharing without compromising the privacy of sensitive data, which is becoming increasingly important in today's digital age. While privacy-preserving computation has gained significant attention in recent times due to its strong security guarantees and numerous potential applications, its efficiency remains its Achilles' heel. Privacy-preserving protocols require significantly higher computational overhead and bandwidth than baseline (i.e., insecure) protocols. Therefore, finding ways to minimize this overhead, whether in computation or communication, asymptotically or concretely, while maintaining security in a reasonable manner remains an exciting open problem. This thesis is centred around enhancing efficiency and reducing the costs of communication and computation for commonly used privacy-preserving primitives, including private set intersection, oblivious transfer, and stealth signatures. Our primary focus is on optimizing the performance of these primitives.
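
    A generic textbook illustration (not a construction from this thesis): additive secret sharing, the simplest ingredient of privacy-preserving computation, lets parties learn a joint sum while each individual input stays hidden behind uniformly random shares. The modulus, inputs, and party count below are arbitrary choices for the sketch.

        import System.Random (randomRIO)

        -- public modulus, chosen arbitrarily for this sketch
        p :: Integer
        p = 2147483647

        -- split a private input into n random shares that sum to it modulo p
        share :: Integer -> Int -> IO [Integer]
        share secret n = do
          rs <- mapM (const (randomRIO (0, p - 1))) [1 .. n - 1]
          pure (mod (secret - sum rs) p : rs)

        main :: IO ()
        main = do
          sharesA <- share 25000 3  -- party A's private input
          sharesB <- share 40000 3  -- party B's private input
          -- each share-holder sees only one random-looking share per input;
          -- combining the local sums reveals nothing beyond the total
          let localSums = zipWith (\a b -> mod (a + b) p) sharesA sharesB
          print (mod (sum localSums) p)  -- 65000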

    An Infinite Needle in a Finite Haystack: Finding Infinite Counter-Models in Deductive Verification

    First-order logic, and quantifiers in particular, are widely used in deductive verification. Quantifiers are essential for describing systems with unbounded domains, but prove difficult for automated solvers. Significant effort has been dedicated to finding quantifier instantiations that establish unsatisfiability, thus ensuring validity of a system's verification conditions. However, in many cases the formulas are satisfiable: this is often the case in intermediate steps of the verification process. For such cases, existing tools are limited to finding finite models as counterexamples. Yet, some quantified formulas are satisfiable but only have infinite models. Such infinite counter-models are especially typical when first-order logic is used to approximate inductive definitions such as linked lists or the natural numbers. The inability of solvers to find infinite models makes them diverge in these cases. In this paper, we tackle the problem of finding such infinite models. These models allow the user to identify and fix bugs in the modeling of the system and its properties. Our approach consists of three parts. First, we introduce symbolic structures as a way to represent certain infinite models. Second, we describe an effective model finding procedure that symbolically explores a given family of symbolic structures. Finally, we identify a new decidable fragment of first-order logic that extends and subsumes the many-sorted variant of EPR, where satisfiable formulas always have a model representable by a symbolic structure within a known family. We evaluate our approach on examples from the domains of distributed consensus protocols and of heap-manipulating programs. Our implementation quickly finds infinite counter-models that demonstrate the source of verification failures in a simple way, while SMT solvers and theorem provers such as Z3, cvc5, and Vampire diverge.
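
    A standard example of the phenomenon (not taken from the paper): the axioms below, written in LaTeX, assert that a binary relation R is transitive, irreflexive, and serial. They are satisfiable, for instance by < on the natural numbers, yet have no finite model, so a finite-model finder can never return them as a counter-model.

        % satisfiable, but only over infinite domains (e.g. < on the naturals)
        \begin{align*}
          &\forall x\, \forall y\, \forall z.\ \bigl(R(x,y) \land R(y,z)\bigr) \rightarrow R(x,z) && \text{(transitivity)}\\
          &\forall x.\ \lnot R(x,x) && \text{(irreflexivity)}\\
          &\forall x\, \exists y.\ R(x,y) && \text{(seriality)}
        \end{align*}

    In any finite structure, seriality forces an R-chain to revisit some element, transitivity then yields R(x,x) for that element, and irreflexivity is violated; hence every model is infinite.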

    Revisiting the growth of polyregular functions: output languages, weighted automata and unary inputs

    Polyregular functions are the class of string-to-string functions definable by pebble transducers (an extension of finite automata) or equivalently by MSO interpretations (a logical formalism). Their output length is bounded by a polynomial in the input length: a function computed by a k-pebble transducer or by a k-dimensional MSO interpretation has growth rate O(n^k). Bojańczyk has recently shown that the converse holds for MSO interpretations, but not for pebble transducers. We give significantly simplified proofs of those two results, extending the former to first-order interpretations by reduction to an elementary property of ℕ-weighted automata. For any k, we also prove the stronger statement that there is some quadratic polyregular function whose output language differs from that of any k-fold composition of macro tree transducers (and which therefore cannot be computed by any k-pebble transducer). In the special case of unary input alphabets, we show that k pebbles suffice to compute polyregular functions of growth O(n^k). This is obtained as a corollary of a basis of simple word sequences whose ultimately periodic combinations generate all polyregular functions with unary input. Finally, we study polyregular and polyblind functions between unary alphabets (i.e. integer sequences), as well as their first-order subclasses. Comment: 27 pages, not submitted yet.