Fast and compact self-stabilizing verification, computation, and fault detection of an MST
This paper demonstrates the usefulness of distributed local verification of
proofs, as a tool for the design of self-stabilizing algorithms. In particular,
it introduces a somewhat generalized notion of distributed local proofs, and
utilizes it for improving the time complexity significantly, while maintaining
space optimality. As a result, we show that optimizing the memory size carries
at most a small cost in terms of time, in the context of Minimum Spanning Tree
(MST). That is, we present algorithms that are both time and space efficient
for both constructing an MST and for verifying it. This involves several parts
that may be considered contributions in themselves. First, we generalize the
notion of local proofs, trading off the time complexity for memory efficiency.
This adds a dimension to the study of distributed local proofs, which has been
gaining attention recently. Specifically, we design a (self-stabilizing) proof
labeling scheme which is memory optimal (i.e., $O(\log n)$ bits per node), and
whose time complexity is $O(\log^2 n)$ in synchronous networks, or
$O(\Delta \log^3 n)$ time in asynchronous ones, where $\Delta$ is the maximum
degree of nodes. This answers an open problem posed by Awerbuch and Varghese
(FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in
synchronous networks. Another property is that if faults occurred, then, within
the required detection time above, they are detected by some node in the
locality of each of the faults. Second, we show how to enhance a known
transformer that makes input/output algorithms self-stabilizing. It now takes
as input an efficient construction algorithm and an efficient self-stabilizing
proof labeling scheme, and produces an efficient self-stabilizing algorithm.
When used for MST, the transformer produces a memory optimal self-stabilizing
algorithm, whose time complexity, namely, $O(n)$, is significantly better even
than that of previous algorithms. (The time complexity of previous MST
algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and
the time for optimal space algorithms was $O(n|E|)$.) Inherited from our proof
labeling scheme, our self-stabilizing MST construction algorithm also has the
following two properties: (1) if faults occur after the construction ended,
then they are detected by some nodes within $O(\log^2 n)$ time in synchronous
networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if
faults occurred, then, within the required detection time above, they are
detected within the locality of each of the faults. We also show
how to improve the above two properties, at the expense of some increase in the
memory.
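As a rough illustration of the proof-labeling idea used above (a minimal
sketch in the spirit of spanning-tree verification, not the paper's MST
scheme; all names here are hypothetical), each node can be labeled with the
root's identity and its own hop distance to the root, and then check a purely
local condition against its neighbors:

```python
# Minimal sketch of local verification via proof labels. Each node v holds a
# label (root_id, dist): the claimed root identity and v's hop distance to it.
# For a correct spanning tree there is a labeling every node accepts; for an
# inconsistent configuration, some node near the fault rejects and can raise
# an alarm, mirroring the detection-locality property described above.

def verify_locally(node, label, parent, neighbor_labels):
    """Local test run independently at each node; True means 'accept'.

    node            -- this node's identifier
    label           -- (root_id, dist) pair claimed for this node
    parent          -- claimed tree parent's identifier (None at the root)
    neighbor_labels -- dict: neighbor id -> its (root_id, dist) label
    """
    root_id, dist = label
    # All neighbors must agree on the identity of the root.
    if any(r != root_id for (r, _) in neighbor_labels.values()):
        return False
    if parent is None:
        # The root certifies itself: its own id as root_id, distance zero.
        return node == root_id and dist == 0
    # A non-root must sit exactly one hop below its claimed parent.
    return parent in neighbor_labels and neighbor_labels[parent][1] == dist - 1
```

A configuration is accepted only if every node accepts, so a corrupted label
or tree edge is noticed by some node in its own neighborhood.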
Smooth and Strong PCPs
Probabilistically checkable proofs (PCPs) can be verified based only on a constant amount of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
- A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
- A PCP is smooth if each location in a proof is queried with equal probability.
We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth.
Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
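Stated a bit more formally (our paraphrase of the two bullets above, not the
authors' exact quantifiers; $V^{\pi}(x)$ denotes the verifier making $q$
queries to an alleged proof $\pi$ for claim $x$, $\Pi(x)$ the set of correct
proofs of $x$, and $\delta$ relative Hamming distance):

```latex
% Smoothness: every location of the alleged proof is probed with equal
% probability.
\forall i \in [|\pi|] : \quad
  \Pr\big[\, V^{\pi}(x) \text{ queries position } i \,\big] = \frac{q}{|\pi|}

% Strongness: the rejection probability of an alleged proof of a correct
% claim scales with its distance from the nearest correct proof.
\Pr\big[\, V^{\pi}(x) \text{ rejects } \pi \,\big] \;\geq\;
  \Omega\Big( \min_{\pi^{\star} \in \Pi(x)} \delta(\pi, \pi^{\star}) \Big)
```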
Combinatorial Assortment Optimization
Assortment optimization refers to the problem of designing a slate of
products to offer potential customers, such as stocking the shelves in a
convenience store. The price of each product is fixed in advance, and a
probabilistic choice function describes which product a customer will choose
from any given subset. We introduce the combinatorial assortment problem, where
each customer may select a bundle of products. We consider a model of consumer
choice where the relative value of different bundles is described by a
valuation function, while individual customers may differ in their absolute
willingness to pay, and study the complexity of the resulting optimization
problem. We show that any sub-polynomial approximation to the problem requires
exponentially many demand queries when the valuation function is XOS, and that
no FPTAS exists even for succinctly-representable submodular valuations. On the
positive side, we show how to obtain constant approximations under a
"well-priced" condition, where each product's price is sufficiently high. We
also provide an exact algorithm for $k$-additive valuations, and show how to
extend our results to a learning setting where the seller must infer the
customers' preferences from their purchasing behavior.
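To make the model concrete, here is a tiny brute-force sketch of the
combinatorial assortment problem (a hypothetical toy model with names of our
choosing, not the paper's algorithms): each customer is a willingness-to-pay
multiplier theta, values a bundle B at theta * value(B), buys the
utility-maximizing bundle from the offered slate, and the seller searches all
slates for the revenue maximizer. It is exponential in the number of products
and only meant to pin down the objective.

```python
from itertools import chain, combinations

def powerset(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def best_assortment(products, price, value, customer_thetas):
    """Brute-force combinatorial assortment (toy sketch).

    products        -- iterable of product ids
    price           -- dict: product -> fixed price
    value           -- function: frozenset of products -> bundle value
    customer_thetas -- willingness-to-pay multipliers, one per customer
    """
    best_revenue, best_slate = 0.0, frozenset()
    for slate in map(frozenset, powerset(products)):
        revenue = 0.0
        for theta in customer_thetas:
            # Each customer picks the utility-maximizing bundle from the
            # slate; the empty bundle (utility 0) models buying nothing.
            bundle = max(
                map(frozenset, powerset(slate)),
                key=lambda b: theta * value(b) - sum(price[p] for p in b),
            )
            revenue += sum(price[p] for p in bundle)
        if revenue > best_revenue:
            best_revenue, best_slate = revenue, slate
    return best_slate, best_revenue
```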
Efficient Algorithms for Privately Releasing Marginals via Convex Relaxations
Consider a database of $n$ people, each represented by a bit-string of length
$d$ corresponding to the setting of $d$ binary attributes. A $k$-way marginal
query is specified by a subset $S$ of $k$ attributes, and a $k$-dimensional
binary vector specifying their values. The result for this query is a count of
the number of people in the database whose attribute vector restricted to $S$
agrees with the specified vector.
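Concretely, a $k$-way marginal is a conjunction count, which a few lines make
precise (a sketch with names of our choosing, using NumPy):

```python
import numpy as np

def marginal_query(data, attrs, values):
    """Count people whose attributes `attrs` match the bits `values`.

    data   -- (n, d) 0/1 matrix, one row per person
    attrs  -- list of k column indices (the subset S of attributes)
    values -- list of k bits the selected attributes must equal
    """
    # A row contributes iff it agrees with `values` on every column in `attrs`.
    return int(np.all(data[:, attrs] == np.array(values), axis=1).sum())
```

A differentially private release perturbs such counts; the challenge addressed
below is doing so for all $k$-way marginals with much less error than
independent per-query noise.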
Privately releasing approximate answers to a set of $k$-way marginal queries
is one of the most important and well-motivated problems in differential
privacy. Information theoretically, the error complexity of marginal queries is
well-understood: the per-query additive error is known to be at least
$\Omega(\min\{\sqrt{n}, d^{k/2}\})$ and at most
$\tilde{O}(\min\{\sqrt{n}\, d^{1/4}, d^{k/2}\})$. However, no polynomial
time algorithm with error complexity as low as the information theoretic upper
bound is known for small $n$. In this work we present a polynomial time
algorithm that, for any distribution on marginal queries, achieves average
error at most $\tilde{O}(\sqrt{n}\, d^{\lceil k/2 \rceil / 4})$. This error
bound is as good as the best known information theoretic upper bounds for
$k = 2$. This bound is an improvement over previous work on efficiently
releasing marginals when $k$ is small and when error $o(n)$ is desirable.
Using private boosting we are also able to give nearly matching worst-case
error bounds.
Our algorithms are based on the geometric techniques of Nikolov, Talwar, and
Zhang. The main new ingredients are convex relaxations and careful use of the
Frank-Wolfe algorithm for constrained convex minimization. To design our
relaxations, we rely on the Grothendieck inequality from functional analysis.
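For reference, the Frank-Wolfe method invoked above minimizes a convex
function over a convex body using only a linear-optimization oracle for that
body. A generic sketch follows (the standard algorithm with its classic step
size; nothing here is specific to the paper's relaxation):

```python
def frank_wolfe(grad, linear_min, x0, steps=100):
    """Generic Frank-Wolfe for min f(x) over a convex set K (sketch).

    grad       -- returns the gradient of f at a point x
    linear_min -- oracle: given a vector c, returns argmin over s in K of <c, s>
    x0         -- feasible starting point in K (a list of floats)
    """
    x = list(x0)
    for t in range(steps):
        s = linear_min(grad(x))          # linear minimization oracle over K
        gamma = 2.0 / (t + 2.0)          # classic step-size schedule
        # Move toward the oracle's point; iterates stay inside K because each
        # step is a convex combination of points of K.
        x = [(1.0 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x
```

Its appeal is that each iteration needs only linear optimization over the
relaxed body, which can remain tractable where projection would not be.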
Information Gains from Cosmological Probes
In light of the growing number of cosmological observations, it is important
to develop versatile tools to quantify the constraining power and consistency
of cosmological probes. Originally motivated by information theory, we use
the relative entropy to compute the information gained by Bayesian updates in
units of bits. This measure quantifies both the improvement in precision and
the 'surprise', i.e. the tension arising from shifts in central values. Our
starting point is a WMAP9 prior which we update with observations of the
distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak
lensing as well as the 2015 Planck release. We consider the parameters of the
flat $\Lambda$CDM concordance model and some of its extensions, which include
curvature and the Dark Energy equation of state parameter $w$. We find that,
relative to WMAP9 and within these model spaces, the probes that have provided
the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and
SNe experiments (3.1 bits). The other cosmological probes, including weak
lensing (1.7 bits) and $H_0$ measures (1.7 bits), have contributed
information but at a lower level. Furthermore, we do not find any significant
surprise when updating the constraints of WMAP9 with any of the other
experiments, meaning that they are consistent with WMAP9. However, when we
choose Planck15 as the prior, we find that, accounting for the full
multi-dimensionality of the parameter space, the weak lensing measurements of
CFHTLenS produce a large surprise of 4.4 bits which is statistically
significant at the $8\sigma$ level. We discuss how the relative entropy
provides a versatile and robust framework to compare cosmological probes in the
context of current and future surveys.
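The information gain discussed above is the relative entropy
(Kullback-Leibler divergence) of the updated posterior with respect to the
prior, expressed in bits. For Gaussian approximations to both distributions it
has a standard closed form, sketched here (the generic formula, not code from
the paper):

```python
import numpy as np

def info_gain_bits(mu_prior, cov_prior, mu_post, cov_post):
    """D_KL(posterior || prior) in bits for two multivariate Gaussians.

    mu_*  -- length-d mean vectors (NumPy arrays)
    cov_* -- (d, d) covariance matrices
    """
    d = len(mu_prior)
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_post - mu_prior
    nats = 0.5 * (
        np.trace(inv_prior @ cov_post)  # change in precision
        + diff @ inv_prior @ diff       # shift of the central values
        - d
        + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post))
    )
    return nats / np.log(2.0)           # convert nats to bits
```

Loosely, the trace and determinant terms capture the gain in precision, while
the mean-shift term captures the tension from shifted central values that the
abstract calls 'surprise'.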