Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Broadcasting on Random Directed Acyclic Graphs
We study a generalization of the well-known model of broadcasting on trees.
Consider a directed acyclic graph (DAG) with a unique source vertex $X$, and
suppose all other vertices have indegree $d \geq 1$. Let the vertices at
distance $k$ from $X$ be called layer $k$. At layer $0$, $X$ is given a random
bit. At layer $k \geq 1$, each vertex receives $d$ bits from its parents in
layer $k-1$, which are transmitted along independent binary symmetric channel
edges, and combines them using a $d$-ary Boolean processing function. The goal
is to reconstruct $X$ with probability of error bounded away from $1/2$ using
the values of all vertices at an arbitrarily deep layer. This question is
closely related to models of reliable computation and storage, and information
flow in biological networks.
In this paper, we analyze randomly constructed DAGs, for which we show that
broadcasting is only possible if the noise level is below a certain degree- and
function-dependent critical threshold. For $d \geq 3$, and random DAGs with
layer sizes $\Omega(\log k)$ and majority processing functions, we identify the
critical threshold. For $d = 2$, we establish a similar result for NAND
processing functions. We also prove a partial converse for odd $d \geq 3$,
illustrating that the identified thresholds are impossible to improve by
selecting different processing functions if the decoder is restricted to using
a single vertex.
Finally, for any noise level, we construct explicit DAGs (using expander
graphs) with bounded degree and layer sizes $\Theta(\log k)$ admitting
reconstruction. In particular, we show that such DAGs can be generated in
deterministic quasi-polynomial time or randomized polylogarithmic time in the
depth. These results portray a doubly-exponential advantage for storing a bit
in DAGs compared to trees, where $d = 1$ but layer sizes must grow exponentially
with depth in order to enable broadcasting.
Comment: 33 pages, double column format. arXiv admin note: text overlap with arXiv:1803.0752
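The layered broadcast process just described is easy to simulate. The sketch below is our own illustration, not code from the paper: it builds a random DAG layer by layer with parents chosen uniformly at random with replacement, flips each transmitted bit with the BSC noise probability, applies majority processing, and decodes by a majority vote over the final layer.

```python
import random

def broadcast_on_random_dag(depth, layer_size, d, noise, seed=0):
    """Simulate one broadcast of a random bit through a layered random DAG.

    Every vertex in layer k >= 1 picks d parents uniformly at random (with
    replacement) from layer k-1, receives each parent bit through an
    independent BSC(noise) edge, and applies d-ary majority (random
    tie-break for even d). Returns True iff majority decoding over the
    final layer recovers the source bit.
    """
    rng = random.Random(seed)
    source_bit = rng.randint(0, 1)
    layer = [source_bit]  # layer 0 contains only the source vertex
    for _ in range(depth):
        new_layer = []
        for _ in range(layer_size):
            received = []
            for _ in range(d):
                bit = rng.choice(layer)       # random parent in previous layer
                if rng.random() < noise:      # BSC edge flips with prob. noise
                    bit ^= 1
                received.append(bit)
            ones = sum(received)
            if 2 * ones == d:                 # tie, possible only for even d
                new_layer.append(rng.randint(0, 1))
            else:
                new_layer.append(1 if 2 * ones > d else 0)
        layer = new_layer
    estimate = 1 if 2 * sum(layer) >= len(layer) else 0
    return estimate == source_bit
```

At low noise (say 1%) with d = 3, recovery succeeds in nearly every trial even at large depth, while as the noise approaches 1/2 the success rate drops to chance, qualitatively matching the threshold behavior the paper quantifies.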
From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz
The next few years will be exciting as prototype universal quantum processors
emerge, enabling implementation of a wider variety of algorithms. Of particular
interest are quantum heuristics, which require experimentation on quantum
hardware for their evaluation, and which have the potential to significantly
expand the breadth of quantum computing applications. A leading candidate is
Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates
between applying a cost-function-based Hamiltonian and a mixing Hamiltonian.
Here, we extend this framework to allow alternation between more general
families of operators. The essence of this extension, the Quantum Alternating
Operator Ansatz, is the consideration of general parametrized families of
unitaries rather than only those corresponding to the time-evolution under a
fixed local Hamiltonian for a time specified by the parameter. This ansatz
supports the representation of a larger, and potentially more useful, set of
states than the original formulation, with potential long-term impact on a
broad array of application areas. For cases that call for mixing only within a
desired subspace, refocusing on unitaries rather than Hamiltonians enables more
efficiently implementable mixers than was possible in the original framework.
Such mixers are particularly useful for optimization problems with hard
constraints that must always be satisfied, defining a feasible subspace, and
soft constraints whose violation we wish to minimize. More efficient
implementation enables earlier experimental exploration of an alternating
operator approach to a wide variety of approximate optimization, exact
optimization, and sampling problems. Here, we introduce the Quantum Alternating
Operator Ansatz, lay out design criteria for mixing operators, detail mappings
for eight problems, and provide brief descriptions of mappings for diverse
problems.
Comment: 51 pages, 2 figures. Revised to match journal paper.
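For concreteness, the original QAOA alternation that this ansatz generalizes can be sketched with a plain-numpy statevector simulation of MaxCut; the function names and angles below are our own illustrative choices, not from the paper.

```python
import numpy as np

def qaoa_state(n, edges, gammas, betas):
    """Statevector sketch of the original QAOA for MaxCut on n qubits.

    Alternates the diagonal cost phase exp(-i*gamma*C) with the
    transverse-field mixer exp(-i*beta*X_q) on every qubit q.
    Qubit q is bit q (little-endian) of the basis-state index.
    """
    dim = 1 << n
    idx = np.arange(dim)
    bits = (idx[:, None] >> np.arange(n)) & 1
    # C(z) = number of edges cut by the assignment z.
    cost = np.zeros(dim)
    for u, v in edges:
        cost += bits[:, u] ^ bits[:, v]
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cost) * state          # cost layer
        for q in range(n):                                  # mixer layer
            # exp(-i*beta*X_q) = cos(beta) I - i sin(beta) X_q
            flipped = np.flip(state.reshape([2] * n), axis=n - 1 - q).reshape(dim)
            state = np.cos(beta) * state - 1j * np.sin(beta) * flipped
    return state, cost

# Expected cut value after one layer on a triangle (illustrative angles).
state, cost = qaoa_state(3, [(0, 1), (1, 2), (0, 2)], [0.6], [0.4])
expected_cut = float(np.sum(np.abs(state) ** 2 * cost))
```

With gamma = beta = 0 the state stays uniform and the expected cut of the triangle is exactly 1.5; tuning the angles (or, in the generalized ansatz, swapping in other mixing unitaries) moves probability mass toward higher-cut strings.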
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem, which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as the threshold model, where a query returns
positive only if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles to construct explicit capacity-achieving
codes.
[This is a shortened version of the actual abstract in the thesis.]
Comment: EPFL PhD thesis.
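To make the group testing setup concrete, here is a small sketch of the noiseless, non-threshold case with a random pooling design and the standard COMP decoder (declare an item defective iff it never appears in a negative test). The thesis itself targets the unreliable-outcome and threshold variants with explicit condenser-based designs; all names below are our own.

```python
import random

def comp_decode(pools, outcomes, n):
    """COMP decoding: an item is defective iff it appears in no negative pool."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)
    return candidates

def run_group_testing(n, defectives, num_tests, pool_prob, seed=0):
    """Random Bernoulli pooling design with noiseless OR-type query outcomes."""
    rng = random.Random(seed)
    pools = [[i for i in range(n) if rng.random() < pool_prob]
             for _ in range(num_tests)]
    outcomes = [any(i in defectives for i in pool) for pool in pools]
    return comp_decode(pools, outcomes, n)
```

In the noiseless model COMP never produces false negatives, and with 3 defectives among 100 items, on the order of 100 random tests at inclusion probability 1/4 already leave few or no false positives.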
When could NISQ algorithms start to create value in discrete manufacturing?
Are quantum advantages in discrete manufacturing achievable in the near term?
As manufacturing-relevant NISQ algorithms, we identified Quantum Annealing (QA)
and the Quantum Approximate Optimization Algorithm (QAOA) for combinatorial
optimization, as well as Differentiable Quantum Circuits (DQC) for solving
non-linear PDEs. While there is evidence that QAOA can outperform classical
approaches, this requires post-NISQ circuit depths. In the case of QA, there is
so far no unequivocal evidence of an advantage over classical computation, yet
different protocols could lead to finding such instances. Together with a
well-chosen quantum feature map, DQCs are a promising concept; further
investigations for higher-dimensional problems and improvements in training
could follow.
Comment: 39 pages (thesis).
On computationally efficient learning for stabilizers and beyond
Artificial intelligence, big data, machine learning, neural networks: look up any recent research proposal and with good probability at least one of these phrases will appear. It's no secret that learning has taken this era of computer science by storm in our attempt to create software that performs extremely complicated tasks. Since quantum mechanics is one of the most accurate models of our physical world currently known, it then makes sense to ask what kinds of quantum systems can or cannot be learned. As with many problems in quantum information and quantum computing, the simplest non-trivial versions of these problems start with the stabilizer formalism. In this dissertation, we examine learning problems centered around the stabilizer formalism in various different models from a theoretical standpoint, using the tools of computer science and quantum information. Specifically, our focus will be on computational complexity, rather than sample complexity. We begin by looking at learning in the tomographical sense. Here, one has black-box access to copies of an unknown quantum state |ψ⟩ and wants to learn properties of the state or outright output an approximation of |ψ⟩. In this setting, [Mon17] gave an efficient learning algorithm for stabilizer states. The key algorithmic tool was Bell difference sampling, which allows one to sample from the stabilizer group of a stabilizer state. [GNW21] extended the analysis of Bell difference sampling beyond just stabilizer states. Throughout Part I we turn to Bell difference sampling to improve upon learning algorithms for states with only a few (i.e., either O(log n) or strictly less than n, depending on context) T gates. By using symplectic Fourier analysis, which is the generalization of Boolean Fourier analysis to a symplectic vector space over F_2^{2n}, we derive powerful tools to understand the Bell difference sampling distribution.
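As a minimal illustration of the symplectic structure mentioned above (the notation and function name are ours, not the dissertation's): writing a Pauli operator, up to phase, as a pair of bit vectors (a | b) in F_2^{2n}, the symplectic inner product determines whether two Paulis commute.

```python
import numpy as np

def symplectic_form(x, y, n):
    """Symplectic inner product [x, y] = x_a . y_b + x_b . y_a (mod 2)
    for x, y in F_2^{2n}, each written as (a-part | b-part).

    Two Pauli operators commute (up to phase) iff the form is 0 on
    their bit representations.
    """
    x, y = np.asarray(x), np.asarray(y)
    return (int(np.dot(x[:n], y[n:])) + int(np.dot(x[n:], y[:n]))) % 2

# Single-qubit check: X = (1|0) and Z = (0|1) anticommute,
# while X commutes with itself.
# symplectic_form([1, 0], [0, 1], 1) == 1
# symplectic_form([1, 0], [1, 0], 1) == 0
```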
With these tools we first give a tolerant property testing algorithm for stabilizer states. That is, we give an algorithm that distinguishes whether a state is ε1-close to some stabilizer state or ε2-far from all stabilizer states, for certain parameter regimes of ε1 and ε2. We use our improved knowledge of Bell difference sampling to improve upon the completeness and soundness analysis of the property tester given by [GNW21], which is not tolerant. A second application is stabilizer fidelity estimation and approximation. Given a state |ψ⟩ that is O(1)-close to a stabilizer state, we output such a stabilizer state in time 2^{O(n)}. This beats the previous 2^{O(n^2)} brute-force search algorithm. Having such a stabilizer state also lets us figure out how close |ψ⟩ is to being a stabilizer state. A third application is extending Montanaro's learning algorithm to the output of Clifford + O(log n) non-Clifford gate circuits. More generally, our algorithm interpolates between Montanaro's algorithm and pure state tomography algorithms, with runtime poly(n)·exp(t), where t is the number of non-Clifford gates. This asymptotically matches the runtime of classical simulation algorithms for such circuits. A key algorithmic step in this work is the ability to "compress" the "stabilizer-ness" of a state onto a few qubits, allowing the "non-stabilizer-ness" to be brute-forced on the remaining qubits. Our final application is pseudorandomness lower bounds. Introduced by [JLS18], a pseudorandom quantum state ensemble is a set of quantum states that are computationally indistinguishable from Haar random. By re-purposing algorithms from above, we produce a test that behaves differently when given a state produced by fewer than n T gates in a Clifford + T circuit versus a Haar random state. We note that this is tight assuming the existence of linear-time quantum-secure one-way functions.
Pivoting now, we also study the stabilizer formalism in the PAC learning framework proposed by [Val84]. Here one does not have control over the measurements, but must make do regardless (within information-theoretic limits). We analyze the problem in two ways. First, we show that, unlike stabilizer states, learning the associated Clifford unitaries in the proper PAC model is NP-hard. This is done by a reduction from the problem of finding a full-rank matrix in an affine subspace of matrices over F_2. The second is studying stabilizer states in the presence of noise. We utilize the Statistical Query framework, a popular modification to the PAC learning framework that is inherently tolerant to noise. There, we also show hardness in this framework by a reduction from Learning Parities with Noise. This gives evidence that even in the PAC model, stabilizer states are hard to learn with noise.
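For reference, the Learning Parities with Noise distribution used in that last reduction can be sketched in a few lines; the function name and parameters are our own illustrative choices.

```python
import random

def lpn_samples(secret, m, noise_rate, seed=0):
    """Draw m samples (a, <a, secret> + e mod 2) with a uniform in F_2^n
    and e a Bernoulli(noise_rate) error bit."""
    rng = random.Random(seed)
    samples = []
    for _ in range(m):
        a = [rng.randint(0, 1) for _ in range(len(secret))]
        label = sum(ai * si for ai, si in zip(a, secret)) % 2
        if rng.random() < noise_rate:
            label ^= 1  # flip the label with probability noise_rate
        samples.append((a, label))
    return samples
```

With noise_rate = 0 the labels are exact parities and Gaussian elimination recovers the secret; at constant noise the problem is believed to be hard, which is what the Statistical Query reduction exploits.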