Computational Complexity in Electronic Structure
In quantum chemistry, the price paid by all known efficient model chemistries
is either the truncation of the Hilbert space or uncontrolled approximations.
Theoretical computer science suggests that these restrictions are not mere
shortcomings of the algorithm designers and programmers but could stem from the
inherent difficulty of simulating quantum systems. Extensions of computer
science and information processing that exploit quantum mechanics have led to new
ways of understanding the ultimate limitations of computational power.
Interestingly, this perspective helps us understand widely used model
chemistries in a new light. In this article, the fundamentals of computational
complexity will be reviewed and motivated from the vantage point of chemistry.
Then recent results from the computational complexity literature regarding
common model chemistries including Hartree-Fock and density functional theory
are discussed.
Sculpting Quantum Speedups
Given a problem which is intractable for both quantum and classical
algorithms, can we find a sub-problem for which quantum algorithms provide an
exponential advantage? We refer to this problem as the "sculpting problem." In
this work, we give a full characterization of sculptable functions in the query
complexity setting. We show that a total function f can be restricted to a
promise P such that Q(f|_P)=O(polylog(N)) and R(f|_P)=N^{Omega(1)}, if and only
if f has a large number of inputs with large certificate complexity. The proof
uses some interesting techniques: for one direction, we introduce new
relationships between randomized and quantum query complexity in various
settings, and for the other direction, we use a recent result from
communication complexity due to Klartag and Regev. We also characterize
sculpting for other query complexity measures, such as R(f) vs. R_0(f) and
R_0(f) vs. D(f).
Along the way, we prove some new relationships for quantum query complexity:
for example, a nearly quadratic relationship between Q(f) and D(f) whenever the
promise of f is small. This contrasts with the recent super-quadratic query
complexity separations, showing that the maximum gap between classical and
quantum query complexities is indeed quadratic in various settings - just not
for total functions!
Lastly, we investigate sculpting in the Turing machine model. We show that if
there is any BPP-bi-immune language in BQP, then every language outside BPP can
be restricted to a promise which places it in PromiseBQP but not in PromiseBPP.
Under a weaker assumption, that some problem in BQP is hard on average for
P/poly, we show that every paddable language outside BPP is sculptable in this
way.
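To make the query measures above concrete, here is a minimal brute-force sketch (an illustration added for this listing, not code from the paper; the helper names are invented) that computes the deterministic query complexity D(f) and the certificate complexity C(f) of the n-bit OR function by exhaustive search. The randomized and quantum measures R(f) and Q(f) need more machinery and are omitted.

```python
from itertools import combinations, product

def f_or(x):
    """The n-bit OR function, a standard toy example."""
    return int(any(x))

def certificate_complexity(f, n):
    """C(f): max over inputs x of the smallest set of bits whose values force f."""
    worst = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for k in range(n + 1):
            # S certifies x if every y agreeing with x on S satisfies f(y) = f(x)
            if any(all(f(y) == fx for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in S))
                   for S in combinations(range(n), k)):
                worst = max(worst, k)
                break
    return worst

def deterministic_query_complexity(f, n, fixed=None):
    """D(f): depth of an optimal decision tree, found by brute-force recursion."""
    fixed = fixed or {}
    completions = [y for y in product((0, 1), repeat=n)
                   if all(y[i] == b for i, b in fixed.items())]
    if len({f(y) for y in completions}) == 1:
        return 0  # f is already determined by the queried bits
    return 1 + min(
        max(deterministic_query_complexity(f, n, {**fixed, i: b}) for b in (0, 1))
        for i in range(n) if i not in fixed)

if __name__ == "__main__":
    n = 4
    print("C(OR_4) =", certificate_complexity(f_or, n))           # 4: the all-zeros input
    print("D(OR_4) =", deterministic_query_complexity(f_or, n))   # 4: every bit must be read
```

Here the single all-zeros input is what drives C(f) up to n; the characterization above asks for a large number of such high-certificate inputs before a function can be sculpted.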
Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case and average-case analysis give us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success in the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
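As a toy illustration of the quantity being formalized (an illustration added for this listing, not the paper's class definitions; the function names are invented), the sketch below estimates the smoothed cost of quicksort with a first-element pivot: the maximum over adversarial inputs of the expected number of comparisons once the input is perturbed by Gaussian noise of standard deviation sigma.

```python
import random
import sys

sys.setrecursionlimit(10_000)  # the worst-case run below recurses to depth ~n

def quicksort_comparisons(a):
    """Number of comparisons made by quicksort that always pivots on the first element."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def expected_smoothed_cost(adversarial_input, sigma, trials=20):
    """Estimate E[cost(x + sigma * g)] for one adversarial input x; the smoothed
    measure is the maximum of this quantity over all adversarial inputs."""
    total = 0
    for _ in range(trials):
        perturbed = [x + random.gauss(0.0, sigma) for x in adversarial_input]
        total += quicksort_comparisons(perturbed)
    return total / trials

if __name__ == "__main__":
    n = 400
    sorted_input = [i / n for i in range(n)]  # worst case for a first-element pivot
    print("worst case, sigma = 0  :", quicksort_comparisons(sorted_input))
    print("smoothed,   sigma = 0.1:", round(expected_smoothed_cost(sorted_input, 0.1)))
```

With no noise the sorted input forces about n^2/2 comparisons, while a perturbation of standard deviation 0.1 (large compared to the 1/n gaps) scrambles the order and pulls the expected cost down toward the n log n average case, the interpolation between worst case and average case that the proposed framework formalizes.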
Computation in generalised probabilistic theories
The existence of an efficient quantum algorithm for factoring suggests that
quantum computation is intrinsically more powerful than classical computation.
At present, the best known upper bound on the power of quantum
computation is that BQP is in AWPP. This work investigates limits on
computational power that are imposed by physical principles. To this end, we
define a circuit-based model of computation in a class of operationally-defined
theories more general than quantum theory, and ask: what is the minimal set of
physical assumptions under which the above inclusion still holds? We show that
given only an assumption of tomographic locality (roughly, that multipartite
states can be characterised by local measurements), efficient computations are
contained in AWPP. This inclusion still holds even without assuming a basic
notion of causality (roughly, that probabilities for
outcomes cannot depend on future measurement choices). Following Aaronson, we
extend the computational model by allowing post-selection on measurement
outcomes. Aaronson showed that the corresponding quantum complexity class is
equal to PP. Given only the assumption of tomographic locality, the inclusion
in PP still holds for post-selected computation in general theories. Thus in a
world with post-selection, quantum theory is optimal for computation in the
space of all general theories. We then consider whether relativised complexity
results can be obtained for general theories. It is not clear how to define a
sensible notion of an oracle in the general framework that reduces to the
standard notion in the quantum case. Nevertheless, it is possible to define
computation relative to a `classical oracle'. Then, we show there exists a
classical oracle relative to which efficient computation in any theory
satisfying the causality assumption and tomographic locality does not include
NP.
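For orientation, the inclusions this abstract appeals to are standard results: BQP is contained in AWPP (Fortnow and Rogers), AWPP sits inside PP, and Aaronson's theorem identifies post-selected quantum computation with PP:

\[
\mathrm{BQP} \subseteq \mathrm{AWPP} \subseteq \mathrm{PP} \subseteq \mathrm{PSPACE},
\qquad
\mathrm{PostBQP} = \mathrm{PP}.
\]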
Resource Bounded Immunity and Simplicity
Revisiting the thirty-year-old notions of resource-bounded immunity and
simplicity, we investigate the structural characteristics of various immunity
notions: strong immunity, almost immunity, and hyperimmunity, as well as their
corresponding simplicity notions. We also study limited forms of immunity,
called k-immunity and feasible k-immunity, together with their corresponding
simplicity notions. Finally, we propose the k-immune hypothesis as a working
hypothesis
that guarantees the existence of simple sets in NP.
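For readers new to these notions, the basic resource-bounded definitions follow the standard pattern below (a paraphrase for orientation, not the paper's exact formulations; the strong, almost, and hyper variants studied in the paper refine this template). For a complexity class $\mathcal{C}$:

\[
A \text{ is } \mathcal{C}\text{-immune} \iff A \text{ is infinite and no infinite subset of } A \text{ belongs to } \mathcal{C},
\]

and, roughly, a set in $\mathcal{C}$ whose complement is $\mathcal{C}$-immune is called $\mathcal{C}$-simple.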
Nondeterministic functions and the existence of optimal proof systems
We provide new characterizations of two previously studied questions on nondeterministic function classes: Q1: Do nondeterministic functions admit efficient deterministic refinements? Q2: Do nondeterministic function classes contain complete functions? We show that Q1 for the class is equivalent to the question of whether the standard proof system for SAT is p-optimal, and to the assumption that every optimal proof system is p-optimal. Assuming only the existence of a p-optimal proof system for SAT, we show that every set with an optimal proof system has a p-optimal proof system. Under the latter assumption, we also obtain a positive answer to Q2 for the class . An alternative view on nondeterministic functions is provided by disjoint sets and tuples. We pursue this approach for disjoint -pairs and its generalizations to tuples of sets from and with disjointness conditions of varying strength. In this way, we obtain new characterizations of Q2 for the class . Question Q1 for is equivalent to the question of whether every disjoint -pair is easy to separate. In addition, we characterize this problem by the question of whether every propositional proof system has the effective interpolation property. Again, these interpolation properties are intimately connected to disjoint -pairs, and we show how different interpolation properties can be modeled by -pairs associated with the underlying proof system.
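The optimality notions used throughout this abstract are the standard Cook-Reckhow ones; as a brief reminder (standard definitions, paraphrased rather than quoted from the paper): a propositional proof system is a polynomial-time computable surjection $h : \Sigma^{*} \to \mathrm{TAUT}$, and $h$ simulates $h'$ if

\[
\exists \text{ polynomial } p \;\; \forall \varphi \in \mathrm{TAUT} \;\; \forall w \;
\big( h'(w) = \varphi \implies \exists w' \text{ with } |w'| \le p(|w|) \text{ and } h(w') = \varphi \big).
\]

If, moreover, such a $w'$ can be computed from $w$ in polynomial time, then $h$ p-simulates $h'$; a system is optimal (respectively p-optimal) if it simulates (respectively p-simulates) every propositional proof system.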
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how, under plausible assumptions, deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.