936 research outputs found
Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases.

We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.

For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers.

We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
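The matrix scaling problem mentioned in the abstract can be illustrated by the classical Sinkhorn iteration, which alternately normalizes rows and columns until the matrix is approximately doubly stochastic. A minimal sketch for intuition only; this is the textbook baseline, not the quantum or interior-point algorithms of the thesis:

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix
    until it is (approximately) doubly stochastic."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)  # scale each row to sum 1
        A /= A.sum(axis=0, keepdims=True)  # scale each column to sum 1
    return A

S = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
```

For strictly positive inputs the iteration converges linearly; the speedups discussed above concern much more refined algorithms for this and its non-commutative generalizations.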
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Constructible sheaves on schemes
We present a uniform theory of constructible sheaves on arbitrary schemes
with coefficients in topological or even condensed rings. This is accomplished
by defining lisse sheaves to be the dualizable objects in the derived
infinity-category of proétale sheaves, while constructible sheaves are those
that are lisse on a stratification. We show that constructible sheaves satisfy
proétale descent. We also establish a t-structure on constructible sheaves in
a wide range of cases. We finally provide a toolset to manipulate categories of
constructible sheaves with respect to the choices of coefficient rings, and use
this to prove that our notions reproduce and extend the various approaches to,
say, constructible ℓ-adic sheaves in the literature. Comment: This paper has been split off from arXiv:2012.02853. Comments welcome
Special Delivery: Programming with Mailbox Types (Extended Version)
The asynchronous and unidirectional communication model supported by
mailboxes is a key reason for the success of actor languages like Erlang and
Elixir for implementing reliable and scalable distributed systems. While many
actors may send messages to some actor, only that actor may (selectively)
receive from its mailbox. Although actors eliminate many of the issues stemming
from shared memory concurrency, they remain vulnerable to communication errors
such as protocol violations and deadlocks.
Mailbox types are a novel form of behavioural type for mailboxes, first
introduced for a process calculus by de'Liguoro and Padovani in 2018, which
captures the contents of a mailbox as a commutative regular expression. Due to
aliasing and nested evaluation contexts, moving from a process calculus to a
programming language is challenging.
This paper presents Pat, the first programming language design incorporating
mailbox types, and describes an algorithmic type system. We make essential use
of quasi-linear typing to tame some of the complexity introduced by aliasing.
Our algorithmic type system is necessarily co-contextual, achieved through a
novel use of backwards bidirectional typing, and we prove it sound and complete
with respect to our declarative type system. We implement a prototype type
checker, and use it to demonstrate the expressiveness of Pat on a factory
automation case study and a series of examples from the Savina actor benchmark
suite. Comment: Extended version of paper accepted to ICFP'2
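The many-senders, one-receiver discipline described above can be illustrated, entirely without types, by modelling each actor's mailbox as a thread-safe queue. A minimal Python sketch (names are illustrative, not Pat syntax):

```python
import queue
import threading

class Actor:
    """Each actor owns a private mailbox: any actor may send to it,
    but only the owning actor receives from it."""
    def __init__(self):
        self.mailbox = queue.Queue()  # asynchronous, unidirectional channel

    def send(self, msg):
        self.mailbox.put(msg)  # many senders may call this concurrently

    def receive(self):
        return self.mailbox.get()  # only the owner should call this

# Several senders deliver to one actor's mailbox.
factory = Actor()
senders = [threading.Thread(target=factory.send, args=(("done", i),))
           for i in range(3)]
for s in senders:
    s.start()
for s in senders:
    s.join()
received = sorted(factory.receive() for _ in range(3))
```

A mailbox type in the sense above would statically constrain which message multisets (here, three `done` messages) the mailbox may contain at each program point; the sketch only shows the untyped runtime model.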
(b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)
The Potts model and the independence polynomial: Uniqueness of the Gibbs measure and distributions of complex zeros
Part 1 of this dissertation studies the antiferromagnetic Potts model, which originates in statistical physics. In particular, the transition from multiple Gibbs measures to a unique Gibbs measure for the antiferromagnetic Potts model on the infinite regular tree is studied. This is called a uniqueness phase transition. A folklore conjecture about the parameter at which the uniqueness phase transition occurs is partly confirmed. The proof uses a geometric condition, which comes from analysing an associated dynamical system.

Part 2 of this dissertation concerns zeros of the independence polynomial. The independence polynomial originates in statistical physics as the partition function of the hard-core model. The location of the complex zeros of the independence polynomial is related to phase transitions in terms of the analyticity of the free energy, and it plays an important role in the design of efficient algorithms to approximately compute evaluations of the independence polynomial. Chapter 5 directly relates the location of the complex zeros of the independence polynomial to the computational hardness of approximating evaluations of the independence polynomial. This is done by additionally relating the set of zeros of the independence polynomial to chaotic behaviour of a naturally associated family of rational functions: the occupation ratios. Chapter 6 studies boundedness of zeros of the independence polynomial of tori for sequences of tori converging to the integer lattice. It is shown that zeros are bounded for sequences of balanced tori, but unbounded for sequences of highly unbalanced tori.
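For a small graph the independence polynomial can be computed by brute force over all vertex subsets. A sketch for intuition (exponential time, nothing like the approximation algorithms discussed above), checked against the well-known recurrence Z(P_n) = Z(P_{n-1}) + x·Z(P_{n-2}) for path graphs:

```python
from itertools import combinations

def independence_poly_coeffs(n, edges):
    """Coefficients of Z_G(x) = sum over independent sets S of x^|S|,
    for a graph on vertices 0..n-1 with the given edge list."""
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(not (u in S and v in S) for u, v in edges):
                coeffs[k] += 1  # S is an independent set of size k
    return coeffs

# Path P_4 with vertices 0-1-2-3: Z(P_4) = 1 + 4x + 3x^2
p4 = independence_poly_coeffs(4, [(0, 1), (1, 2), (2, 3)])
```

Evaluations of this polynomial at positive x are exactly the hard-core partition function, and its complex zeros are the objects whose location the dissertation relates to hardness and chaos.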
Indeterminacy and the law of the excluded middle
This thesis is an investigation into indeterminacy in the foundations of mathematics and its possible consequences for the applicability of the law of the excluded middle (LEM). It characterises different ways in which the natural numbers as well as the sets may be understood to be indeterminate, and asks in what sense this would cease to support the applicability of LEM to reasoning with them.

The first part of the thesis reviews the indeterminacy phenomena on which the argument is based and argues for a distinction between two notions of indeterminacy: a) indeterminacy as applied to domains and b) indefiniteness as applied to concepts. It then addresses possible attempts to secure determinacy in both cases.

The second part of the thesis discusses the advantages that an argument from indeterminacy has over traditional intuitionistic arguments against LEM, and it provides the framework in which conditions for the applicability of LEM can be explicated in the setting of indeterminacy.

The final part of the thesis then applies these findings to concrete cases of indeterminacy. With respect to indeterminacy of domains, I note some problems for establishing a rejection of LEM based on the indeterminacy of the height of the set-theoretic hierarchy. I show that a coherent argument can be made for the rejection of LEM based on the indeterminacy of its width, and assess its philosophical commitments. A final chapter addresses the notion of indefiniteness of our concepts of set and number and asks how this might affect the applicability of LEM.
Algorithms for sparse convolution and sublinear edit distance
In this PhD thesis on fine-grained algorithm design and complexity, we investigate output-sensitive and sublinear-time algorithms for two important problems.

(1) Sparse Convolution: Computing the convolution of two vectors is a basic algorithmic primitive with applications across all of Computer Science and Engineering. In the sparse convolution problem we assume that the input and output vectors have at most t nonzero entries, and the goal is to design algorithms with running times dependent on t. For the special case where all entries are nonnegative, which is particularly important for algorithm design, it has been known for twenty years that sparse convolutions can be computed in near-linear randomized time O(t log^2 n). In this thesis we develop a randomized algorithm with running time O(t log t), which is optimal (under some mild assumptions), and the first near-linear deterministic algorithm for sparse nonnegative convolution. We also present an application of these results, leading to seemingly unrelated fine-grained lower bounds against distance oracles in graphs.

(2) Sublinear Edit Distance: The edit distance of two strings is a well-studied similarity measure with numerous applications in computational biology. While computing the edit distance exactly provably requires quadratic time, a long line of research has led to a constant-factor approximation algorithm in almost-linear time. Perhaps surprisingly, it is also possible to approximate the edit distance k within a large factor O(k) in sublinear time O~(n/k + poly(k)). We drastically improve the approximation factor of the known sublinear algorithms from O(k) to k^{o(1)} while preserving the O~(n/k + poly(k)) running time.
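The obvious output-sensitive baseline for sparse convolution simply multiplies every pair of nonzero entries, taking O(t^2) time rather than the near-linear bounds discussed above. A minimal sketch with sparse vectors represented as index-to-value dictionaries:

```python
def sparse_convolution(a, b):
    """Convolve two sparse vectors given as {index: value} dicts.
    Runs in O(t_a * t_b) time, touching only nonzero entries."""
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0) + x * y
    # Drop cancellations so the output is sparse too.
    return {k: v for k, v in out.items() if v != 0}

# (1 + 2x^5) * (3 + x^7) = 3 + 6x^5 + x^7 + 2x^12
c = sparse_convolution({0: 1, 5: 2}, {0: 3, 7: 1})
```

Closing the gap between this quadratic baseline and time near-linear in t is precisely what the thesis's randomized O(t log t) and deterministic algorithms achieve.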
Geometric optimization problems in quantum computation and discrete mathematics: Stabilizer states and lattices
This thesis consists of two parts:
Part I deals with properties of stabilizer states and their convex
hull, the stabilizer polytope. Stabilizer states, Pauli measurements
and Clifford unitaries are the three building blocks of the stabilizer
formalism whose computational power is limited by the Gottesman-Knill
theorem. This model is usually enriched by a magic state to get
a universal model for quantum computation, referred to as quantum
computation with magic states (QCM). The first part of this thesis
will investigate the role of stabilizer states within QCM from three
different angles.
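The Gottesman-Knill theorem rests on the fact that Clifford unitaries map Pauli operators to Pauli operators, so a simulator need only track each Pauli's X/Z bit vectors. A minimal sketch of two standard update rules, with phases omitted for brevity (illustrative only, not the thesis's constructions):

```python
def apply_h(x, z, q):
    """Conjugating a Pauli by a Hadamard on qubit q swaps its X and Z bits."""
    x, z = list(x), list(z)
    x[q], z[q] = z[q], x[q]
    return x, z

def apply_cnot(x, z, c, t):
    """Conjugating by CNOT propagates X from control to target
    and Z from target to control."""
    x, z = list(x), list(z)
    x[t] ^= x[c]
    z[c] ^= z[t]
    return x, z

# Pauli X on qubit 0 of a 2-qubit system: x-bits [1, 0], z-bits [0, 0]
hx, hz = apply_h([1, 0], [0, 0], 0)          # H maps X to Z
cx, cz = apply_cnot([1, 0], [0, 0], 0, 1)    # CNOT maps X(x)I to X(x)X
```

Because each update costs O(n) bit operations, stabilizer circuits are efficiently classically simulable; magic states are exactly what breaks this bookkeeping and restores universality.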
The first considered quantity is the stabilizer extent, which provides
a tool to measure the non-stabilizerness or magic of a quantum state.
It assigns a quantity to each state roughly measuring how many stabilizer
states are required to approximate the state. It has been shown
that the extent is multiplicative under taking tensor products when
the considered state is a product state whose components consist of
at most three qubits. In Chapter 2, we will prove that
this property does not hold in general; more precisely, the stabilizer
extent is strictly submultiplicative. We obtain this result as
a consequence of rather general properties of stabilizer states. Informally,
our result implies that one should not expect a dictionary to be
multiplicative under taking tensor products whenever the dictionary
size grows subexponentially in the dimension.
In Chapter 3, we consider QCM from a resource theoretic perspective.
The resource theory of magic is based on two types of quantum
channels, completely stabilizer preserving maps and stabilizer operations.
Both classes have the property that they cannot generate additional
magic resources. We will show that these two classes of quantum
channels do not coincide, specifically, that stabilizer operations are a
strict subset of the set of completely stabilizer preserving channels.
This might have the consequence that certain tasks which are usually
realized by stabilizer operations could in principle be performed better
by completely stabilizer preserving maps.
In Chapter 4, the last one of Part I, we consider QCM via the polar
dual stabilizer polytope (also called the Λ-polytope). This polytope
is a superset of the quantum state space, and every quantum state
can be written as a convex combination of its vertices. A way to
classically simulate quantum computing with magic states is based on
simulating Pauli measurements and Clifford unitaries on the vertices
of the Λ-polytope. The complexity of classical simulation with respect
to the polytope is determined by classically simulating the updates
of vertices under Clifford unitaries and Pauli measurements. However,
a complete description of this polytope as a convex hull of its vertices is
only known in low dimensions (for up to two qubits, or one qudit when
odd-dimensional systems are considered). We make progress on this
question by characterizing a certain class of operators that live on the
boundary of the Λ-polytope when the underlying dimension is an odd
prime. This class encompasses, for instance, Wigner operators, which
have been shown to be vertices of Λ. We conjecture that this class
contains even more vertices of Λ. Finally, we will briefly sketch
why applying Clifford unitaries and Pauli measurements to this class
of operators can be efficiently classically simulated.
Part II of this thesis deals with lattices. Lattices are discrete subgroups
of Euclidean space. They occur in various areas of
mathematics, physics and computer science. We will investigate two
types of optimization problems related to lattices.
In Chapter 6 we are concerned with optimization within the space of
lattices. That is, we want to compare the Gaussian potential energy
of different lattices. To make the energy of lattices comparable we
focus on lattices with point density one. In particular, we focus on
even unimodular lattices and show that, up to dimension 24, they are
all critical for the Gaussian potential energy. Furthermore, we find
that all n-dimensional even unimodular lattices with n ≤ 24 are local
minima or saddle points. In contrast, in dimension 32, there are even
unimodular lattices which are local maxima and others which are not
even critical.
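The Gaussian potential energy in question can be probed numerically for small lattices by truncating the defining sum. A sketch (illustrative normalization, not the thesis's criticality analysis) comparing the square lattice with the hexagonal lattice at point density one, where the hexagonal lattice is known to have strictly lower Gaussian energy in dimension 2:

```python
import math

def gaussian_energy(b1, b2, alpha=1.0, R=20):
    """Truncated Gaussian potential energy
    sum over nonzero v in L of exp(-pi * alpha * |v|^2)
    for the 2D lattice L spanned by basis vectors b1, b2."""
    E = 0.0
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            if i == 0 and j == 0:
                continue  # skip the origin
            vx = i * b1[0] + j * b2[0]
            vy = i * b1[1] + j * b2[1]
            E += math.exp(-math.pi * alpha * (vx * vx + vy * vy))
    return E

square = gaussian_energy((1.0, 0.0), (0.0, 1.0))       # det = 1 already
a = math.sqrt(2 / math.sqrt(3))                         # rescale so det = 1
hexagonal = gaussian_energy((a, 0.0), (a / 2, a * math.sqrt(3) / 2))
```

The Gaussian decay makes the truncation error negligible already for a small cutoff R, which is why such numerical comparisons are reliable.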
In Chapter 7 we consider flat tori R^n/L, where L is an n-dimensional
lattice. A flat torus comes with a metric and our goal is to approximate
this metric with a Hilbert space metric. To achieve this, we
derive an infinite-dimensional semidefinite optimization program that
computes the least distortion embedding of the metric space R^n/L into
a Hilbert space. This program allows us to make several interesting
statements about the nature of least distortion embeddings of flat tori.
In particular, we give a simple proof for a lower bound which gives
a constant factor improvement over the previously best lower bound
on the minimal distortion of an embedding of an n-dimensional flat
torus. Furthermore, we show that there is always an optimal embedding
into a finite-dimensional Hilbert space. Finally, we construct
optimal least distortion embeddings for the standard torus R^n/Z^n and
all 2-dimensional flat tori.
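For the simplest flat torus R/Z, the classical round-circle embedding into R^2 has distortion π/2: chords contract geodesic distances by at most a factor π/2, attained at antipodal points. This value can be checked numerically; a sketch:

```python
import math

def torus_dist(s, t):
    """Geodesic distance on the flat torus R/Z."""
    d = abs(s - t) % 1.0
    return min(d, 1.0 - d)

def embed(t):
    """Embed R/Z as a round circle of circumference 1 (radius 1/(2*pi))."""
    r = 1.0 / (2.0 * math.pi)
    return (r * math.cos(2 * math.pi * t), r * math.sin(2 * math.pi * t))

def distortion(n=2000):
    """Product of worst expansion and worst contraction over sampled pairs."""
    expansion = contraction = 0.0
    for k in range(1, n):
        d_geo = torus_dist(0.0, k / n)
        p, q = embed(0.0), embed(k / n)
        d_euc = math.hypot(p[0] - q[0], p[1] - q[1])
        expansion = max(expansion, d_euc / d_geo)
        contraction = max(contraction, d_geo / d_euc)
    return expansion * contraction

dist = distortion()  # close to pi/2
```

The semidefinite programming framework above generalizes this kind of question to arbitrary flat tori R^n/L, where the optimal embedding is no longer obvious.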