Quantum Private Information Retrieval with Sublinear Communication Complexity
This note presents a quantum protocol for private information retrieval, in
the single-server case and with information-theoretic privacy, that has
O(\sqrt{n})-qubit communication complexity, where n denotes the size of the
database. In comparison, any classical protocol in this setting must use
\Omega(n) bits of communication.
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
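As a concrete illustration of local decodability, here is a minimal sketch of the classic 2-query local decoder for the Hadamard code (the simplest locally decodable code, also the one underlying hard-core predicate constructions). The function names are illustrative, not from the surveyed paper; the decoder recovers any message bit from only two codeword positions, and each query is correct with probability at least 1 - 2\delta if a \delta-fraction of the codeword is corrupted.

```python
import random

def hadamard_encode(m):
    # Hadamard codeword of message m in {0,1}^k: one entry per x in {0,1}^k,
    # with value <m, x> mod 2 (x represented as an integer bitmask).
    k = len(m)
    return [sum(m[j] & ((x >> j) & 1) for j in range(k)) % 2
            for x in range(1 << k)]

def local_decode_bit(codeword, i, k, rng=random):
    # 2-query local decoder: pick a uniformly random x and query positions
    # x and x XOR e_i; the XOR of the two answers equals
    # <m, x> + <m, x + e_i> = m_i whenever both queried positions are
    # uncorrupted.
    x = rng.randrange(1 << k)
    return codeword[x] ^ codeword[x ^ (1 << i)]
```

Note the exponential blow-up in codeword length (2^k entries for k message bits); the survey discusses codes with far better rate at the cost of more queries.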
Distributed PCP Theorems for Hardness of Approximation in P
We present a new distributed model of probabilistically checkable proofs
(PCP). A satisfying assignment to a CNF formula is shared between two
parties: Alice knows one half of the variables, Bob knows the other half,
and both parties know the formula. The goal is to have Alice and Bob
jointly write a PCP attesting that the formula is satisfiable, while
exchanging little or no information. Unfortunately, this model as-is does
not allow for nontrivial query complexity. Instead, we focus on a
non-deterministic variant, where the players are helped by Merlin, a third
party who knows the entire assignment.
Using our framework, we obtain, for the first time, PCP-like reductions from
the Strong Exponential Time Hypothesis (SETH) to approximation problems in P.
In particular, under SETH we show that there are no truly-subquadratic
approximation algorithms for Bichromatic Maximum Inner Product over
{0,1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate
Regular Expression Matching, and Diameter in Product Metric. All our
inapproximability factors are nearly-tight. In particular, for the first two
problems we obtain nearly-polynomial factors of 2^{(\log n)^{1-o(1)}}; only
(1+o(1))-factor lower bounds (under SETH) were known before.
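For reference, the baseline that the SETH-based lower bound is measured against is the trivial exhaustive search for Bichromatic Maximum Inner Product, sketched below (function name illustrative). It runs in O(|A| \cdot |B| \cdot d) time; the result above says that, under SETH, no truly-subquadratic algorithm can even approximate this maximum within nearly-polynomial factors.

```python
def bichromatic_max_ip(A, B):
    # Exhaustive search over all red/blue pairs: for {0,1}-vectors the
    # inner product is the number of coordinates where both vectors are 1.
    return max(sum(a & b for a, b in zip(u, v)) for u in A for v in B)
```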
On Quantum Advantage in Information Theoretic Single-Server PIR
In (single-server) Private Information Retrieval (PIR), a server holds a
large database x of size n, and a client holds an index i and wishes to
retrieve x_i without revealing i to the server. It is well known that
information-theoretic privacy, even against an `honest but curious'
server, requires \Omega(n) communication complexity. This is true even if
quantum communication is allowed, and is due to the ability of such an
adversarial server to execute the protocol on a superposition of databases
instead of on a specific database (an `input purification attack'). Nevertheless,
there have been some proposals of protocols that achieve sub-linear
communication and appear to provide some notion of privacy. Most notably, a
protocol due to Le Gall (ToC 2012) with communication complexity O(\sqrt{n}),
and a protocol by Kerenidis et al. (QIC 2016) with logarithmic communication
complexity and shared entanglement.
We show that, in a sense, input purification is the only potent adversarial
strategy, and protocols such as the two protocols above are secure in a
restricted variant of the quantum honest-but-curious (a.k.a. specious) model.
More explicitly, we propose a restricted privacy notion called \emph{anchored
privacy}, where the adversary is forced to execute on a classical database
(i.e. the execution is anchored to a classical database). We show that for
measurement-free protocols, anchored security against honest adversarial
servers implies anchored privacy even against specious adversaries.
Finally, we prove that even with (unlimited) pre-shared entanglement it is
impossible to achieve security in the standard specious model with sub-linear
communication, thus further substantiating the necessity of our relaxation.
This lower bound may be of independent interest (in particular, recalling that
PIR is a special case of Fully Homomorphic Encryption).
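To make the \Omega(n) classical baseline concrete: the only information-theoretically private single-server classical protocol is, essentially, to download the entire database, since any index-dependent message leaks information about i. A minimal sketch (names illustrative):

```python
def trivial_pir(server_db, i):
    # The client requests the whole database; the transcript is n bits and
    # carries no information about i, so privacy is perfect -- matching the
    # Omega(n) lower bound for single-server information-theoretic PIR.
    transcript = list(server_db)
    return transcript[i]
```

The quantum protocols discussed above beat this only under the relaxed (anchored / restricted-specious) privacy notions, which is exactly the point of the paper.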
Sublinear Computation Paradigm
This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” which ran in Japan from October 2014 to March 2020.

To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear, sublinear, and constant time algorithms are required.

The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book.

The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
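A canonical example of the paradigm is sampling-based estimation, whose running time is independent of the input size n. The sketch below (illustrative, not from the book) estimates the mean of a huge 0/1 array by sampling O(log(1/\delta)/\epsilon^2) positions; by a Hoeffding bound the estimate is within \epsilon of the true mean with probability at least 1 - \delta, no matter how large the array is.

```python
import math
import random

def estimate_mean(data, eps=0.05, delta=0.01, rng=random):
    # Sample s = ceil(ln(2/delta) / (2 * eps^2)) positions uniformly at
    # random; s depends only on the accuracy parameters, not on len(data),
    # so the estimator runs in constant time with respect to the input size.
    s = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    return sum(data[rng.randrange(len(data))] for _ in range(s)) / s
```

For eps = 0.05 and delta = 0.01 this draws about a thousand samples, whether the array holds a million entries or a trillion.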