Computational Difficulty of Global Variations in the Density Matrix Renormalization Group
The density matrix renormalization group (DMRG) approach is arguably the most
successful method to numerically find ground states of quantum spin chains. It
amounts to iteratively locally optimizing matrix-product states, aiming at
better and better approximating the true ground state. To date, both a proof of
convergence to the globally best approximation and an assessment of its
complexity are lacking. Here we establish a result on the computational
complexity of an approximation with matrix-product states: The surprising
result is that when one globally optimizes over several sites of local
Hamiltonians, avoiding local optima, one encounters in the worst case a
computationally difficult NP-hard problem (hard even in approximation). The
proof exploits a novel way of relating it to binary quadratic programming. We
discuss intriguing ramifications on the difficulty of describing quantum
many-body systems. Comment: 5 pages, 1 figure, RevTeX, final version
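The binary quadratic programming problem that the hardness proof relates DMRG optimization to can be stated concretely. The brute-force sketch below is a toy illustration, not the paper's reduction: it minimizes a quadratic form over binary vectors, a problem NP-hard in general and tractable here only because the instance is tiny.

```python
# Brute-force minimization of a binary quadratic program: min over x in {0,1}^n
# of sum_{i,j} Q[i][j] * x[i] * x[j]. NP-hard in general; exhaustive search is
# only feasible because this toy instance is tiny.
from itertools import product

def qubo_min(Q):
    """Return (argmin, min) of the quadratic form over all binary vectors."""
    n = len(Q)
    best_x, best_val = None, float("inf")
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical toy instance: diagonal rewards, one off-diagonal penalty.
Q = [[-1, 2, 0],
     [0, -1, 0],
     [0, 0, -1]]
x, v = qubo_min(Q)   # setting x1 and x2 (but not x0) attains the minimum, -2
```

The exponential loop over all 2^n assignments is exactly the worst-case cost one cannot avoid for an NP-hard problem unless P = NP.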
Quantum Interactive Proofs with Competing Provers
This paper studies quantum refereed games, which are quantum interactive
proof systems with two competing provers: one that tries to convince the
verifier to accept and the other that tries to convince the verifier to reject.
We prove that every language having an ordinary quantum interactive proof
system also has a quantum refereed game in which the verifier exchanges just
one round of messages with each prover. A key part of our proof is the fact
that there exists a single quantum measurement that reliably distinguishes
between mixed states chosen arbitrarily from disjoint convex sets having large
minimal trace distance from one another. We also show how to reduce the
probability of error for some classes of quantum refereed games. Comment: 13 pages, to appear in STACS 200
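The distinguishing measurement at the heart of the proof is governed by the trace distance. As a hedged illustration using standard textbook facts (not the paper's multi-prover construction), the Helstrom bound (1 + D)/2 gives the best success probability of a single measurement distinguishing two equiprobable states:

```python
# Trace distance D(rho, sigma) = 0.5 * ||rho - sigma||_1, and the Helstrom
# bound (1 + D)/2: the optimal success probability of a single measurement
# distinguishing two equiprobable states. Toy single-qubit example.
import numpy as np

def trace_distance(rho, sigma):
    eigs = np.linalg.eigvalsh(rho - sigma)   # the difference is Hermitian
    return 0.5 * np.sum(np.abs(eigs))

ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(ket0, ket0)          # |0><0|
sigma = np.outer(ketplus, ketplus)  # |+><+|

D = trace_distance(rho, sigma)      # equals 1/sqrt(2) for these two states
p_success = 0.5 * (1.0 + D)         # best single-measurement success probability
```

For these two pure states D = 1/sqrt(2), so one measurement already succeeds with probability about 0.854; the paper's key fact is that a comparable guarantee holds even for states drawn arbitrarily from disjoint convex sets with large minimal trace distance.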
Sampling and Representation Complexity of Revenue Maximization
We consider (approximate) revenue maximization in auctions where the
distribution on input valuations is given via "black box" access to samples
from the distribution. We observe that the number of samples required -- the
sample complexity -- is tightly related to the representation complexity of an
approximately revenue-maximizing auction. Our main results are upper bounds and
an exponential lower bound on these complexities.
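To make the sample-access model concrete, here is a minimal sketch (an illustrative toy, not the paper's mechanism) that estimates a revenue-maximizing posted price for a single bidder purely from black-box samples of the valuation distribution:

```python
# Sample-based revenue maximization for one bidder, one item: posting price r
# earns r * Pr[v >= r]. With only sample access, pick the sample value that
# maximizes empirical revenue; the k-th largest sample v sells with empirical
# probability k/m.
import random

def empirical_best_price(samples):
    """Return (price, empirical revenue) maximizing revenue over sample values."""
    m = len(samples)
    best_r, best_rev = None, -1.0
    for k, v in enumerate(sorted(samples, reverse=True), start=1):
        rev = v * k / m
        if rev > best_rev:
            best_r, best_rev = v, rev
    return best_r, best_rev

random.seed(0)
samples = [random.uniform(0, 1) for _ in range(10000)]   # valuations ~ U[0, 1]
r, rev = empirical_best_price(samples)
# For U[0, 1] the true optimal posted price is 1/2, with expected revenue 1/4;
# the empirical estimate converges to it as the sample count grows.
```

The number of samples needed for such estimates to be reliable is precisely the sample complexity the abstract relates to representation complexity.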
Maximizing Welfare in Social Networks under a Utility Driven Influence Diffusion Model
Motivated by applications such as viral marketing, the problem of influence
maximization (IM) has been extensively studied in the literature. The goal is
to select a small number of users to adopt an item such that it results in a
large cascade of adoptions by others. Existing works have three key
limitations. (1) They do not account for economic considerations of a user in
buying/adopting items. (2) Most studies on multiple items focus on competition,
with complementary items receiving limited attention. (3) For the network
owner, maximizing social welfare is important to ensure customer loyalty, which
is not addressed in prior work in the IM literature. In this paper, we address
all three limitations and propose a novel model called UIC that combines
utility-driven item adoption with influence propagation over networks. Focusing
on mutually complementary items, we formulate the problem of social welfare
maximization in this setting. We show that while the objective
function is neither submodular nor supermodular, surprisingly a simple greedy
allocation algorithm achieves a constant factor of the optimum
expected social welfare. We develop \textsf{bundleGRD}, a scalable version of
this approximation algorithm, and demonstrate, with comprehensive experiments
on real and synthetic datasets, that it significantly outperforms all
baselines. Comment: 33 pages
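The greedy strategy the abstract refers to can be sketched generically: repeatedly make the single allocation with the largest marginal gain in welfare. The welfare oracle below is a hypothetical coverage-style stand-in, not the paper's UIC objective:

```python
# Generic greedy allocation skeleton: k rounds, each adding the candidate with
# the largest marginal welfare gain. The welfare oracle is a toy stand-in; the
# paper's objective is neither submodular nor supermodular, yet greedy still
# carries a provable guarantee there.
def greedy_allocate(candidates, welfare, k):
    """Pick up to k candidates, each maximizing marginal welfare gain."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for c in candidates - chosen:
            gain = welfare(chosen | {c}) - welfare(chosen)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:            # no positive marginal gain remains
            break
        chosen.add(best)
    return chosen

# Toy welfare oracle: coverage-style value over ground elements.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1}}
welfare = lambda S: len(set().union(*(cover[c] for c in S))) if S else 0
picked = greedy_allocate(set(cover), welfare, k=2)
```

Each round costs one oracle call per remaining candidate, which is why the paper needs a scalable variant (bundleGRD) for large networks.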
Parallel Repetition of Entangled Games with Exponential Decay via the Superposed Information Cost
In a two-player game, two cooperating but non-communicating players, Alice
and Bob, receive inputs taken from a probability distribution. Each of them
produces an output and they win the game if they satisfy some predicate on
their inputs/outputs. The entangled value \omega^*(G) of a game G is the
maximum probability that Alice and Bob can win the game if they are allowed to
share an entangled state prior to receiving their inputs.
The n-fold parallel repetition G^n of a game G consists of n instances of G
where the players receive all the inputs at the same time and produce all
the outputs at the same time. They win G^n if they win each instance of G.
In this paper we show that for any game G whose entangled value \omega^*(G) is
strictly less than 1, \omega^*(G^n) decreases exponentially in n. First, for
any game G on the uniform distribution, we prove an explicit exponential upper
bound on \omega^*(G^n) in terms of n and the sizes of the input and output
sets. From this result, we derive an exponential upper bound for any entangled
game G that depends additionally on the input distribution of G. This implies
parallel repetition with exponential decay for general games, under a condition
on the input distribution. To prove this parallel repetition, we introduce the
concept of \emph{Superposed Information Cost} for entangled games, which is
inspired by the information cost used in communication complexity. Comment: In
the first version of this paper we presented a different, stronger Corollary 1,
but due to an error in the proof we had to modify it in the second version.
This third version is a minor update. We correct some typos and re-introduce a
proof accidentally commented out in the second version.
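For intuition on why parallel repetition requires proof at all, one can brute-force deterministic classical strategies for a tiny game. This toy uses the classical CHSH game (win iff a XOR b = x AND y), not the entangled setting the paper studies, and compares the value of two parallel copies with the square of the single-copy value:

```python
# Brute force over deterministic classical strategies for CHSH and for two
# parallel copies of CHSH. Playing each copy independently guarantees at least
# the square of the single-copy value, so the repeated value need not drop as
# fast as naive intuition suggests.
from itertools import product

def chsh_win(x, y, a, b):
    return (a ^ b) == (x & y)

def value_one_copy():
    best = 0.0
    for fa in product((0, 1), repeat=2):          # Alice: x -> a
        for fb in product((0, 1), repeat=2):      # Bob:   y -> b
            wins = sum(chsh_win(x, y, fa[x], fb[y])
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def value_two_copies():
    inputs = list(product((0, 1), repeat=2))      # question pairs
    best = 0.0
    for fa in product(inputs, repeat=4):          # Alice: (x1,x2) -> (a1,a2)
        for fb in product(inputs, repeat=4):      # Bob:   (y1,y2) -> (b1,b2)
            wins = 0
            for i, (x1, x2) in enumerate(inputs):
                for j, (y1, y2) in enumerate(inputs):
                    wins += (chsh_win(x1, y1, fa[i][0], fb[j][0]) and
                             chsh_win(x2, y2, fa[i][1], fb[j][1]))
            best = max(best, wins / 16)
    return best

v1 = value_one_copy()    # classical value of CHSH: 3/4
v2 = value_two_copies()  # compare against v1 ** 2
```

Because the players see both questions before answering either copy, their answers for one copy can depend on the other copy's question, which is the correlation that makes exponential-decay theorems nontrivial.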
Replica Placement on Bounded Treewidth Graphs
We consider the replica placement problem: given a graph with clients and
nodes, place replicas on a minimum set of nodes to serve all the clients; each
client is associated with a request and a maximum distance it can travel to
get served, and there is a maximum limit (capacity) on the amount of requests a
replica can serve. The problem falls under the general framework of capacitated
set covering. It admits an O(\log n)-approximation, and it is NP-hard to
approximate within a factor of \Omega(\log n). We study the problem in terms of
the treewidth t of the graph and present an O(t)-approximation
algorithm. Comment: An abridged version of this paper is to appear in the
proceedings of WADS'1
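Since the problem sits inside capacitated set covering, the classic greedy set-cover rule (repeatedly pick the set covering the most uncovered clients per unit cost) is the natural reference point for the O(\log n) guarantee. A minimal uncapacitated sketch on a toy instance, not the paper's treewidth-based algorithm:

```python
# Greedy weighted set cover: repeatedly choose the set with the best ratio of
# newly covered elements to cost. Classic analysis gives an H_n ~ ln(n)
# approximation factor. Uncapacitated toy sketch.
def greedy_set_cover(universe, sets, cost):
    """Return names of chosen sets covering the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Best: most newly covered elements per unit cost.
        name = max((s for s in sets if sets[s] & uncovered),
                   key=lambda s: len(sets[s] & uncovered) / cost[s])
        chosen.append(name)
        uncovered -= sets[name]
    return chosen

# Hypothetical instance: 6 clients, 4 candidate replica locations, unit costs.
clients = range(6)
sets = {"u": {0, 1, 2}, "v": {2, 3}, "w": {3, 4, 5}, "z": {0, 5}}
cost = {"u": 1.0, "v": 1.0, "w": 1.0, "z": 1.0}
picked = greedy_set_cover(clients, sets, cost)
```

The capacitated and distance-constrained variant in the paper restricts which client/replica pairs are feasible, but the covering skeleton is the same.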
AMS measurements of cosmogenic and supernova-ejected radionuclides in deep-sea sediment cores
Samples of two deep-sea sediment cores from the Indian Ocean are analyzed
with accelerator mass spectrometry (AMS) to search for traces of recent
supernova activity around 2 Myr ago. Here, long-lived radionuclides, which are
synthesized in massive stars and ejected in supernova explosions, namely 26Al,
53Mn and 60Fe, are extracted from the sediment samples. The cosmogenic isotope
10Be, which is mainly produced in the Earth's atmosphere, is analyzed for
dating purposes of the marine sediment cores. The first AMS measurement results
for 10Be and 26Al are presented; they constitute the first detailed study of
the 1.7-3.1 Myr time period with high time resolution. Our first
results do not support a significant extraterrestrial signal of 26Al above
terrestrial background. However, there is evidence that, like 10Be, 26Al might
be a valuable isotope for dating of deep-sea sediment cores for the past few
million years. Comment: 5 pages, 2 figures, Proceedings of the Heavy Ion
Accelerator Symposium on Fundamental and Applied Science, 2013, to be
published in EPJ Web of Conferences
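The dating role of these isotopes comes down to exponential decay. A small sketch using approximate literature half-lives (10Be about 1.39 Myr, 26Al about 0.72 Myr; these values are assumptions for illustration, not numbers from this work):

```python
# Surviving fraction of a radionuclide after time t: 2 ** (-t / t_half).
# Half-lives are approximate literature values, used here for illustration.
def surviving_fraction(t_myr, half_life_myr):
    return 2.0 ** (-t_myr / half_life_myr)

# Fraction of an original signal left at 2.7 Myr, inside the studied
# 1.7-3.1 Myr window:
f_be10 = surviving_fraction(2.7, 1.39)   # roughly a quarter remains
f_al26 = surviving_fraction(2.7, 0.72)   # well under a tenth remains
```

Enough 10Be survives over a few Myr to track deposition, which is why it works as a dating isotope, while any extraterrestrial 26Al signal from ~2 Myr ago is strongly attenuated and must be separated from terrestrial background.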
A Characterization of Visibility Graphs for Pseudo-Polygons
In this paper, we give a characterization of the visibility graphs of
pseudo-polygons. We first identify some key combinatorial properties of
pseudo-polygons and give a set of five necessary conditions based on these
properties. We then prove that the conditions are also sufficient via a
reduction to a characterization of vertex-edge visibility graphs given by
O'Rourke and Streinu.
Limitations to Frechet's Metric Embedding Method
Frechet's classical isometric embedding argument has evolved to become a
major tool in the study of metric spaces. An important example of a Frechet
embedding is Bourgain's embedding. The authors have recently shown that for
every e>0 any n-point metric space contains a subset of size at least n^(1-e)
which embeds into l_2 with distortion O(\log(2/e) /e). The embedding we used is
non-Frechet, and the purpose of this note is to show that this is not
coincidental. Specifically, for every e>0, we construct arbitrarily large
n-point metric spaces, such that the distortion of any Frechet embedding into
l_p on subsets of size at least n^{1/2 + e} is \Omega((\log n)^{1/p}). Comment: 10 pages, 1 figure
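Frechet's embedding itself is easy to state and check: map each point to its vector of distances to all points. In the l_infinity norm this is an isometry, since the coordinate indexed by y witnesses d(x, y) exactly and the triangle inequality caps every other coordinate. A small sanity check on a toy metric:

```python
# Frechet embedding: x -> (d(x, s) for each landmark s). With all points as
# landmarks this is an isometry into l_infinity. Toy check on the shortest-path
# metric of a 4-cycle.
def frechet_embed(points, d):
    """Map each point to its vector of distances to all points."""
    return {x: [d(x, s) for s in points] for x in points}

def linf(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

points = [0, 1, 2, 3]
d = lambda x, y: min((x - y) % 4, (y - x) % 4)   # 4-cycle distances
emb = frechet_embed(points, d)

# Isometry check: l_infinity distance between images equals the original distance.
ok = all(linf(emb[x], emb[y]) == d(x, y) for x in points for y in points)
```

Bourgain-style embeddings into l_p replace the full landmark set with random subsets at multiple scales; the note's point is that any embedding whose coordinates are distances to subsets (i.e., any Frechet embedding) is inherently limited on large subsets.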
Approximating the minimum directed tree cover
Given a directed graph G with non-negative costs on the arcs, a directed
tree cover of G is a rooted directed tree T such that the head or the tail (or
both) of every arc in G is touched by T. The minimum directed tree
cover problem (DTCP) is to find a directed tree cover of minimum cost. The
problem is known to be NP-hard. In this paper, we show that the weighted Set
Cover Problem (SCP) is a special case of DTCP. Hence, one can expect at best to
approximate DTCP with the same ratio as for SCP. We show that this expectation
can be satisfied in some way by designing a purely combinatorial approximation
algorithm for DTCP and proving that its approximation ratio is logarithmic in
D, where D is the maximum outgoing degree of the nodes in G. Comment: 13 pages