Hardness of approximation for quantum problems
The polynomial hierarchy plays a central role in classical complexity theory.
Here, we define a quantum generalization of the polynomial hierarchy, and
initiate its study. We show that not only are there natural complete problems
for the second level of this quantum hierarchy, but that these problems are in
fact hard to approximate. Using these techniques, we also obtain hardness of
approximation for the class QCMA. Our approach is based on the use of
dispersers, and is inspired by the classical results of Umans regarding
hardness of approximation for the second level of the classical polynomial
hierarchy [Umans, FOCS 1999]. The problems for which we prove hardness of
approximation include, among others, a quantum version of the Succinct Set
Cover problem and a variant of the local Hamiltonian problem with hybrid
classical-quantum ground states.
Comment: 21 pages, 1 figure; extended abstract appeared in Proceedings of the
39th International Colloquium on Automata, Languages and Programming (ICALP),
pages 387-398, Springer, 2012
On the Computational Complexity of MapReduce
In this paper we study MapReduce computations from a complexity-theoretic
perspective. First, we formulate a uniform version of the MRC model of Karloff
et al. (2010). We then show that the class of regular languages, and moreover
all of sublogarithmic space, lies in constant round MRC. This result also
applies to the MPC model of Andoni et al. (2014). In addition, we prove that,
conditioned on a variant of the Exponential Time Hypothesis, there are strict
hierarchies within MRC so that increasing the number of rounds or the amount of
time per processor increases the power of MRC. To the best of our knowledge we
are the first to approach the MapReduce model with complexity-theoretic
techniques, and our work lays the foundation for further analysis relating
MapReduce to established complexity classes.
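The MRC model discussed above organizes computation into synchronous rounds of map, shuffle, and reduce. As a toy illustration only (not the authors' formalism, and with `mapreduce_round`, `mapper`, and `reducer` as our own illustrative names), a single round can be simulated in a few lines of Python:

```python
from collections import defaultdict

def mapreduce_round(records, mapper, reducer):
    """One synchronous MapReduce round: map every record, group the
    emitted (key, value) pairs by key, then reduce each group."""
    shuffled = defaultdict(list)
    for record in records:
        for key, value in mapper(record):   # map phase
            shuffled[key].append(value)     # shuffle: group values by key
    out = []
    for key, values in shuffled.items():    # reduce phase, one key at a time
        out.extend(reducer(key, values))
    return out

# Word count completes in a single round.
lines = ["a b a", "b c"]
counts = mapreduce_round(
    lines,
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda key, values: [(key, sum(values))],
)
```

A constant-round MRC computation is then a constant-length chain of such rounds, with the complexity-theoretic restrictions (sublinear memory per machine, polynomial total work) imposed on the mappers and reducers.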
A SIMULATION MODEL FOR SOLVING THE TRAVELING SALESMAN PROBLEM USING AN ARTIFICIAL INTELLIGENCE APPROACH: ANT COLONY OPTIMIZATION
One of the appealing developments in software is the discovery of optimization algorithms. Many intricate and complex tasks would be impossible to carry out manually, or, if attempted by hand, would demand enormous amounts of time and effort. With optimization algorithms, such tasks can be completed more easily and quickly, in some cases even with a theoretical guarantee of obtaining the best solution. In this study, we build a software simulation model that solves the traveling salesman problem using the ant colony optimization algorithm, giving the user a step-by-step visualization of how the problem is solved. The simulation program was developed with the Extreme Programming (XP) software development method on the Windows operating system, using the C# programming language in Visual Studio 2019. The results show that the program provides the user with a good visualization/simulation.
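The study's simulator itself is a C# program and is not reproduced here, but the core ant colony optimization loop for the traveling salesman problem can be sketched compactly. The sketch below is a generic textbook-style ACO in Python; all parameter names and default values (`alpha`, `beta`, `rho`, `q`, ant and iteration counts) are standard ACO conventions, not values taken from the study:

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Generic ant colony optimization for TSP.
    dist: symmetric matrix of pairwise city distances.
    Returns (best_tour, best_length)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    eta = [[0.0 if i == j else 1.0 / dist[i][j]  # heuristic visibility
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # Move to j with probability ~ tau[i][j]^alpha * eta[i][j]^beta
                weights = [(j, (tau[i][j] ** alpha) * (eta[i][j] ** beta))
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour, tour_length(tour)))

        # Evaporate old pheromone, then deposit proportional to tour quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

# Demo: four cities at the corners of a unit square.
# The optimal tour is the perimeter, of length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in pts]
        for (x1, y1) in pts]
tour, length = ant_colony_tsp(dist)
```

A step-by-step visualization like the one described in the abstract would amount to rendering `tours` and `tau` after each outer iteration.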
Robust Simulations and Significant Separations
We define and study a new notion of "robust simulations" between complexity
classes which is intermediate between the traditional notions of
infinitely-often and almost-everywhere, as well as a corresponding notion of
"significant separations". A language L has a robust simulation in a complexity
class C if there is a language in C which agrees with L on arbitrarily large
polynomial stretches of input lengths. There is a significant separation of L
from C if there is no robust simulation of L in C. The new notion of simulation
is a cleaner and more natural notion of simulation than the infinitely-often
notion. We show that various implications in complexity theory such as the
collapse of PH if NP = P and the Karp-Lipton theorem have analogues for robust
simulations. We then use these results to prove that most known separations in
complexity theory, such as hierarchy theorems, fixed polynomial circuit lower
bounds, time-space tradeoffs, and the theorems of Allender and Williams, can be
strengthened to significant separations, though in each case, an almost
everywhere separation is unknown.
Proving our results requires several new ideas, including a completely
different proof of the hierarchy theorem for non-deterministic polynomial time
than the ones previously known.
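Spelled out with explicit quantifiers, our reading of the definition above is the following (a paraphrase of the abstract's wording, with the "rob-" notation ours):

```latex
% Robust simulation (paraphrase): some C-language agrees with L on
% arbitrarily large polynomial stretches of input lengths.
L \in \mathrm{rob\text{-}}\mathcal{C} \iff
  \exists L' \in \mathcal{C}\;\; \forall k \ge 1\;\; \forall m_0\;\; \exists m \ge m_0 :
  \quad L \cap \{0,1\}^n = L' \cap \{0,1\}^n \;\text{ for all } n \in [m, m^k].
```

A significant separation of L from C is then simply the negation of the right-hand side.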
Alternating Hierarchies for Time-Space Tradeoffs
Nepomnjascii's Theorem states that for all 0 < \epsilon < 1 and k \ge 1, the
class of languages recognized in nondeterministic time n^k and space
n^\epsilon, NTISP[n^k, n^\epsilon], is contained in the linear time hierarchy.
By considering restrictions on the size of the universal quantifiers in the
linear time hierarchy, this paper refines Nepomnjascii's result to give a sub-
hierarchy, Eu-LinH, of the linear time hierarchy that is contained in NP and
which contains NTISP[n^k, n^\epsilon ]. Hence, Eu-LinH contains NL and SC. This
paper investigates basic structural properties of Eu-LinH. Then the
relationships between Eu-LinH and the classes NL, SC, and NP are considered to
see if they can shed light on the NL = NP or SC = NP questions. Finally, a new
hierarchy, zeta-LinH, is defined to reduce the space requirements needed for
the upper bound on Eu-LinH.
Comment: 14 pages
Easiness Amplification and Uniform Circuit Lower Bounds
We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well. This amplification has several consequences:
* An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
* Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d, we give another natural circuit composition problem, computable in tilde{O}(n^{1+e}) time or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
* A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits. As a corollary, if n^{1.1} time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems
A Quantum Time-Space Lower Bound for the Counting Hierarchy
We obtain the first nontrivial time-space lower bound for quantum algorithms
solving problems related to satisfiability. Our bound applies to MajSAT and
MajMajSAT, which are complete problems for the first and second levels of the
counting hierarchy, respectively. We prove that for every real d and every
positive real epsilon there exists a real c>1 such that either: MajMajSAT does
not have a quantum algorithm with bounded two-sided error that runs in time
n^c, or MajSAT does not have a quantum algorithm with bounded two-sided error
that runs in time n^d and space n^{1-\epsilon}. In particular, MajMajSAT cannot
be solved by a quantum algorithm with bounded two-sided error running in time
n^{1+o(1)} and space n^{1-\epsilon} for any epsilon>0. The key technical
novelty is a time- and space-efficient simulation of quantum computations with
intermediate measurements by probabilistic machines with unbounded error. We
also develop a model that is particularly suitable for the study of general
quantum computations with simultaneous time and space bounds. However, our
arguments hold for any reasonable uniform model of quantum computation.
Comment: 25 pages
Bounded Relativization
Relativization is one of the most fundamental concepts in complexity theory, which explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class C, we say that a statement is C-relativizing if the statement holds relative to every oracle O ∈ C. It is easy to see that every result that relativizes also C-relativizes for every complexity class C. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.
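In symbols, the definition reads as follows (our paraphrase of the wording above):

```latex
% Bounded relativization: a statement is C-relativizing when it holds
% relative to every oracle drawn from the class C.
\varphi \;\text{ is } \mathcal{C}\text{-relativizing}
  \iff \forall \mathcal{O} \in \mathcal{C} :\; \varphi^{\mathcal{O}} \text{ holds}.
```

Ordinary relativization is the special case where the quantifier ranges over all oracles, which is why every relativizing result C-relativizes for every C.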
First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant \epsilon > 0, BPE^{MCSP}/2^{\epsilon n} is not contained in SIZE[2^n/n].
We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021).
Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
- Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
- Oliveira and Santhanam (STOC 2017) recently proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
- Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.
Approximation, Proof Systems, and Correlations in a Quantum World
This thesis studies three topics in quantum computation and information: The
approximability of quantum problems, quantum proof systems, and non-classical
correlations in quantum systems.
In the first area, we demonstrate a polynomial-time (classical) approximation
algorithm for dense instances of the canonical QMA-complete quantum constraint
satisfaction problem, the local Hamiltonian problem. In the opposite direction,
we next introduce a quantum generalization of the polynomial-time hierarchy,
and define problems which we prove are not only complete for the second level
of this hierarchy, but are in fact hard to approximate.
In the second area, we study variants of the interesting and stubbornly open
question of whether a quantum proof system with multiple unentangled quantum
provers is equal in expressive power to a proof system with a single quantum
prover. Our results concern classes such as BellQMA(poly), and include a novel
proof of perfect parallel repetition for SepQMA(m) based on cone programming
duality.
In the third area, we study non-classical quantum correlations beyond
entanglement, often dubbed "non-classicality". Among our results are two novel
schemes for quantifying non-classicality: The first proposes the new paradigm
of exploiting local unitary operations to study non-classical correlations, and
the second introduces a protocol through which non-classical correlations in a
starting system can be "activated" into distillable entanglement with an
ancilla system.
An introduction to all required linear algebra and quantum mechanics is
included.
Comment: PhD Thesis, 240 pages