Schloss Dagstuhl – Leibniz Center for Informatics
DROPS – Dagstuhl Research Online Publication Server
21693 research outputs found
Computation First: Rebuilding Constructivism with Effects (Invited Talk)
Constructive logic and type theory have traditionally been grounded in pure, effect-free models of computation. This paper argues that such a restriction is not a foundational necessity but a historical artifact, and it advocates for a broader perspective of effectful constructivism, where computational effects, such as state, non-determinism, and exceptions, are directly and internally embedded in the logical and computational foundations. We begin by surveying examples where effects reshape logical principles, and then outline three approaches to effectful constructivism, focusing on realizability models: Monadic Combinatory Algebras, which extend classical partial combinatory algebras with effectful computation; Evidenced Frames, a flexible semantic structure capable of uniformly capturing a wide range of effects; and Effectful Higher-Order Logic (EffHOL), a syntactic approach that directly translates logical propositions into specifications for effectful programs. We further illustrate how concrete type theories can internalize effects, via the family of type theories TT^□_C. Together, these works demonstrate that effectful constructivism is not merely possible but a natural and robust extension of traditional frameworks.
Unsolvable Terms in Filter Models (Invited Talk)
Intersection type theories (itt’s) and filter models, i.e., λ-calculus models generated by itt’s, are reviewed in full generality. In this framework, which subsumes most λ-calculus models in the literature based on Scott-continuous functions, we discuss the interpretation of unsolvable terms. We give a necessary, but not sufficient, condition on an itt for the interpretation of some unsolvable term to be non-trivial in the filter model it generates. This result is obtained by building on a type-theoretic characterisation of the fine structure of unsolvables.
Repetition Aware Text Indexing for Matching Patterns with Wildcards
We study the problem of indexing a text T[1..n] to support pattern matching with wildcards. The input of a query is a pattern P[1..m] containing h ∈ [0, k] wildcard (a.k.a. don't care) characters, and the output is the set of occurrences of P in T (i.e., starting positions of substrings of T that match P), where k = o(log n) is fixed at index construction. A classic solution by Cole et al. [STOC 2004] provides an index with space complexity O(n ⋅ (c log n)^k/k!) and query time O(m + 2^h log log n + occ), where c > 1 is a constant, and occ denotes the number of occurrences of P in T. We introduce a new data structure that significantly reduces space usage for highly repetitive texts while maintaining efficient query processing. Its space (in words) and query time are as follows:
O(δ log(n/δ) ⋅ c^k (1 + (log^k(δ log n))/k!)) and O((m + 2^h + occ) log n)
The parameter δ, known as substring complexity, is a recently introduced measure of repetitiveness that serves as a unifying and lower-bounding metric for several popular measures, including the number of phrases in the LZ77 factorization (denoted by z) and the number of runs in the Burrows-Wheeler Transform (denoted by r). Moreover, O(δ log(n/δ)) represents the optimal space required to encode the data in terms of n and δ, which indicates how close our space is to the minimum required. In another trade-off, we match the query time of Cole et al.’s index using O(n + δ log(n/δ) ⋅ (c log δ)^{k+ε}/k!) space, where ε > 0 is an arbitrarily small constant. We also demonstrate how these techniques can be applied to a more general indexing problem, where the query pattern includes k-gaps (a gap can be interpreted as a contiguous sequence of wildcard characters).
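For concreteness, the query semantics can be stated as a naive baseline that scans T directly instead of using any index; the function name and the choice of '?' as the wildcard symbol are illustrative assumptions, and its O(nm)-time scan is precisely what compressed indexes such as the one above are designed to avoid.

# Naive baseline (not the paper's index): report every starting position i
# such that the length-m window of T beginning at i matches P, where the
# wildcard symbol matches any single character.
def wildcard_occurrences(T, P, wildcard='?'):
    n, m = len(T), len(P)
    occ = []
    for i in range(n - m + 1):
        if all(p == wildcard or p == t for p, t in zip(P, T[i:i + m])):
            occ.append(i)
    return occ

# Example with h = 1 wildcard:
# wildcard_occurrences("abracadabra", "a?ra")  ->  [0, 7]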
Fully Dynamic Algorithms for Transitive Reduction
Given a directed graph G, a transitive reduction G^t of G (first studied by Aho, Garey, and Ullman [SICOMP '72]) is a minimal subgraph of G that preserves the reachability relation between every two vertices in G.
In this paper, we study the computational complexity of transitive reduction in the dynamic setting. We obtain the first fully dynamic algorithms for maintaining a transitive reduction of a general directed graph undergoing updates such as edge insertions or deletions. Our first algorithm achieves O(m + n log n) amortized update time, which is near-optimal for sparse directed graphs, and can even support extended update operations such as inserting a set of edges all incident to the same vertex, or deleting an arbitrary set of edges. Our second algorithm relies on fast matrix multiplication and achieves O(m + n^{1.585}) worst-case update time.
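For intuition on the object being maintained, here is a minimal static sketch (assuming the input is a DAG, where the transitive reduction is unique; all names are illustrative, and this is not the paper's dynamic algorithm): an edge (u, v) is kept only if v is unreachable from u once that edge is ignored. Recomputing this from scratch after every update is exactly the cost the dynamic algorithms avoid.

# Static sketch: transitive reduction of a DAG given as a dict mapping each
# vertex to the set of its out-neighbours.
def transitive_reduction_dag(adj):
    def reachable(src, dst, skip_edge):
        # DFS from src to dst that is not allowed to use the edge `skip_edge`.
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if (u, w) == skip_edge or w in seen:
                    continue
                if w == dst:
                    return True
                seen.add(w)
                stack.append(w)
        return False

    # Keep (u, v) only if there is no alternative path from u to v.
    return {u: {v for v in vs if not reachable(u, v, (u, v))}
            for u, vs in adj.items()}

# Example: the edge (0, 2) is implied by 0 -> 1 -> 2 and is dropped.
# transitive_reduction_dag({0: {1, 2}, 1: {2}, 2: set()})
#   -> {0: {1}, 1: {2}, 2: set()}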
The Hardness of Decision Tree Complexity
Let f be a Boolean function given as either a truth table or a circuit. How difficult is it to find the decision tree complexity, also known as deterministic query complexity, of f in both cases? We prove that this problem is NC¹-hard and PSPACE-hard, respectively. The second bound is tight, and the first bound is close to being tight.
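For the truth-table case, D(f) satisfies the standard recurrence D(f) = 0 when f is constant and D(f) = min_i (1 + max(D(f|_{x_i=0}), D(f|_{x_i=1}))) otherwise, which the following brute-force sketch evaluates directly; the function name and input encoding are illustrative assumptions, and its exponential running time is unrelated to the hardness results above.

# Brute-force D(f): `truth_table` is a sequence of 2**n bits, where bit i is
# the value of f on the assignment whose j-th variable is the j-th bit of i.
def decision_tree_depth(truth_table):
    n = len(truth_table).bit_length() - 1          # number of variables
    def depth(inputs):                             # indices of the current subcube
        if len({truth_table[i] for i in inputs}) <= 1:
            return 0                               # constant: no queries needed
        best = n                                   # querying every variable always suffices
        for var in range(n):
            zero = [i for i in inputs if not (i >> var) & 1]
            one = [i for i in inputs if (i >> var) & 1]
            if zero and one:                       # var is not yet fixed on this subcube
                best = min(best, 1 + max(depth(zero), depth(one)))
        return best
    return depth(range(2 ** n))

# Example: decision_tree_depth((0, 0, 0, 1)) == 2   (AND of two variables)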
Bringing the Power of Interactive Theorem Proving to Web3 (Invited Talk)
The security of the web3 ecosystem relies on the correctness of implementations of advanced mathematical formalisms, such as those underpinning various DeFi products or zk proof systems. The complexity of these formalisms, however, makes automated reasoning about said correctness challenging, if not intractable. In this talk, we give an overview of how some of these challenges can be tackled successfully in the world of interactive theorem proving. In particular, we focus on what it means to build an infrastructure for scalable reasoning about correctness of zk circuits, cryptographic algorithms, protocol models, and EVM/Yul-based smart contracts in the Lean proof assistant. Along the way, we give examples of how Nethermind was able to leverage such infrastructure in a number of real-world engagements.
Branch Sequentialization in Quantum Polytime
Quantum algorithms leverage quantumly-controlled data in order to achieve computational advantage. This implies that the programs use constructs depending on quantum data and not just classical data such as measurement outcomes. Current compilation strategies for quantum control flow involve compiling the branches of a quantum conditional, either in depth or in width, which in general leads to circuits of exponential size. This is known as the branch sequentialization problem. We introduce and study a compilation technique for avoiding branch sequentialization in a language that is sound and complete for quantum polynomial time, thus improving on existing polynomial-size-preserving compilation techniques.
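As a back-of-the-envelope illustration of the blowup in question (this is only a cost model, not the paper's compilation technique), sequentializing a quantum conditional emits the compiled circuits of both branches, controlled and anti-controlled on the guard, so the emitted circuit doubles with every level of nesting:

# Hypothetical cost model of naive branch sequentialization: each quantum
# conditional lays out BOTH compiled branches one after the other.
def naive_compiled_size(nesting_depth, leaf_gates=1):
    if nesting_depth == 0:
        return leaf_gates
    return 2 * naive_compiled_size(nesting_depth - 1, leaf_gates)

# Twenty nested conditionals already yield 2**20 = 1048576 gates.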
Continuous Map Matching to Paths Under Travel Time Constraints
In this paper, we study the problem of map matching with travel time constraints. Given a sequence of k spatio-temporal measurements and an embedded path graph with travel time costs, the goal is to snap each measurement to a close-by location in the graph, such that consecutive locations can be reached from one another along the path within the timestamp difference of the respective measurements. This problem arises in public transit data processing as well as in map matching of movement trajectories to general graphs. We show that the classical approach for this problem, which relies on selecting a finite set of candidate locations in the graph for each measurement, cannot guarantee to find a consistent solution. We propose a new algorithm that can deal with an infinite set of candidate locations per measurement. We prove that our algorithm always detects a consistent map matching path (if one exists). Despite the enlarged candidate set, we also demonstrate that our algorithm has superior running time in theory and practice. For a path graph with n nodes, we show that our algorithm runs in O(k² n log(nk)) and, under mild assumptions, in O(k n^λ + n log³ n) for λ ≈ 0.695. This is a significant improvement over the baseline, which runs in O(k n²) and which might not even identify a correct solution. The performance of our algorithm hinges on an efficient segment-circle intersection data structure. We describe how to design and implement such a data structure for our application. In the experimental evaluation, we demonstrate the usefulness of our novel algorithm on a diverse set of generated measurements as well as GTFS data.
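To give a flavour of the geometric primitive mentioned above (this is only the single-query computation, not the paper's intersection data structure; all names are illustrative assumptions), intersecting one segment with one circle reduces to a quadratic in the segment parameter t:

import math

# Intersection parameters t in [0, 1] of the segment A + t*(B - A) with the
# circle of centre C and radius r, obtained from |A + t*(B - A) - C|^2 = r^2.
def segment_circle_intersections(ax, ay, bx, by, cx, cy, r):
    dx, dy = bx - ax, by - ay
    fx, fy = ax - cx, ay - cy
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return []                                  # degenerate segment or no hit
    sq = math.sqrt(disc)
    ts = [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]
    return [t for t in ts if 0.0 <= t <= 1.0]      # keep hits on the segment itself

# Example: segment_circle_intersections(0, 0, 4, 0, 2, 0, 1) -> [0.25, 0.75]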
Submodular Hypergraph Partitioning: Metric Relaxations and Fast Algorithms via an Improved Cut-Matching Game
Despite there being significant work on developing spectral [Chan et al., 2018; Lau et al., 2023; Kwok et al., 2022] and metric-embedding-based [Louis and Makarychev, 2016] approximation algorithms for hypergraph conductance, little is known regarding the approximability of other hypergraph partitioning objectives.
This work proposes algorithms for a general model of hypergraph partitioning that unifies both undirected and directed versions of many well-studied partitioning objectives. The first contribution of this paper introduces polymatroidal cut functions, a large class of cut functions amenable to approximation algorithms via metric embeddings and routing multicommodity flows. We demonstrate a simple O(√{log n})-approximation, where n is the number of vertices in the hypergraph, for these problems by rounding relaxations to metrics of negative-type.
The second contribution of this paper generalizes the cut-matching game framework of Khandekar et al. [Khandekar et al., 2007] to tackle polymatroidal cut functions. This yields an almost-linear time O(log n)-approximation algorithm for standard versions of undirected and directed hypergraph partitioning [Kwok et al., 2022]. A technical contribution of our construction is a novel cut-matching game, which greatly relaxes the set of allowed actions by the cut player and allows for the use of approximate s-t maximum flows by the matching player. We believe this to be of independent interest.