A Compression Technique Exploiting References for Data Synchronization Services
Department of Computer Science and Engineering
In a variety of network applications, there exists a significant amount of shared data between two end hosts. Examples include data synchronization services that replicate data from one node to another. Given that shared data may be highly correlated with new data to transmit, we ask how such shared data can best be utilized to improve the efficiency of data transmission. To answer this, we develop an encoding technique, SyncCoding, that effectively replaces bit sequences of the data to be transmitted with pointers to their matching bit sequences in the shared data, called references. By doing so, SyncCoding can reduce data traffic, speed up data transmission, and save energy consumed in transmission. Our evaluations of SyncCoding implemented in Linux show that it outperforms the popular existing encoding techniques Brotli, LZMA, Deflate, and Deduplication. The gains of SyncCoding over those techniques in terms of data size after compression are about 12.4%, 20.1%, 29.9%, and 61.2% in a cloud storage scenario, and about 78.3%, 79.6%, 86.1%, and 92.9% in a web browsing scenario, respectively.
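The core idea, replacing byte sequences with back-references into data the receiver already holds, can be illustrated with DEFLATE's preset-dictionary mode. This is a standard zlib feature used here as an analogy, not SyncCoding itself; the sample strings are made up for the demonstration:

```python
import zlib

def compress_with_reference(data: bytes, shared: bytes) -> bytes:
    # Preset-dictionary DEFLATE: matches against `shared` become
    # back-references instead of literal bytes in the output stream.
    c = zlib.compressobj(zdict=shared)
    return c.compress(data) + c.flush()

def decompress_with_reference(blob: bytes, shared: bytes) -> bytes:
    # The receiver must hold the same shared data to resolve references.
    d = zlib.decompressobj(zdict=shared)
    return d.decompress(blob) + d.flush()

shared = b"The quick brown fox jumps over the lazy dog. " * 20
new = b"The quick brown fox jumps over the lazy dog, again and again."

plain = zlib.compress(new)
with_ref = compress_with_reference(new, shared)
assert decompress_with_reference(with_ref, shared) == new
assert len(with_ref) < len(plain)  # shared context shrinks the payload
```

Both sides must agree on the shared data; in a sync service that is exactly the previously replicated state.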
A Stable Marriage Requires Communication
The Gale-Shapley algorithm for the Stable Marriage Problem is known to take Θ(n²) steps to find a stable marriage in the worst case, but only Θ(n log n) steps in the average case (with n women and n men). In 1976, Knuth asked whether the worst-case running time can be improved in a model of computation that does not require sequential access to the whole input. A partial negative answer was given by Ng and Hirschberg, who showed that Θ(n²) queries are required in a model that allows certain natural random-access queries to the participants' preferences. A significantly more general, albeit slightly weaker, lower bound follows from Segal's general analysis of communication complexity, namely that Ω(n²/log n) Boolean queries are required in order to find a stable marriage, regardless of the set of allowed Boolean queries.
Using a reduction to the communication complexity of the disjointness problem, we give a far simpler, yet significantly more powerful, argument showing that Ω(n²) Boolean queries of any type are indeed required for finding a stable (or even an approximately stable) marriage. Notably, unlike Segal's lower bound, our lower bound generalizes also to (A) randomized algorithms, (B) allowing arbitrary separate preprocessing of the women's preference profile and of the men's preference profile, (C) several variants of the basic problem, such as whether a given pair is married in every/some stable marriage, and (D) determining whether a proposed marriage is stable or far from stable. In order to analyze "approximately stable" marriages, we introduce the notion of "distance to stability" and provide an efficient algorithm for its computation.
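For reference, the Gale-Shapley deferred-acceptance procedure the abstract builds on can be sketched as follows. This is the standard textbook formulation (men propose, women tentatively accept), with preferences given as index lists; the 3x3 instance is illustrative:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose in preference order; each woman keeps the best
    proposer seen so far and releases her previous tentative match."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list (lower = better)
    rank = [{m: i for i, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n          # next woman each man will propose to
    wife = [None] * n
    husband = [None] * n
    free = list(range(n))          # men without a tentative match
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        h = husband[w]
        if h is None:
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][h]:      # w prefers m to her current match
            wife[h] = None
            free.append(h)
            husband[w], wife[m] = m, w
        else:
            free.append(m)                 # rejected; m proposes again later
    return wife

# 3 men and 3 women, preference lists by index
men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(gale_shapley(men, women))  # → [0, 1, 2]
```

Each man proposes to each woman at most once, which is where the Θ(n²) worst-case step count comes from.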
Understanding the complexity of #SAT using knowledge compilation
Two main techniques have been used so far to solve the #P-hard problem #SAT.
The first one, used in practice, is based on an extension of DPLL for model
counting called exhaustive DPLL. The second approach, more theoretical,
exploits the structure of the input to compute the number of satisfying
assignments, usually via a dynamic programming scheme on a decomposition of the formula. In this paper, we make a first step toward separating these two techniques by exhibiting a family of formulas that can be solved in polynomial time with the first technique but needs exponential time with the second one. We show this by observing that both techniques implicitly construct a very specific Boolean circuit equivalent to the input formula. We then show that every beta-acyclic formula can be represented by a polynomial-size circuit corresponding to the first method, and exhibit a family of beta-acyclic formulas which cannot be represented by polynomial-size circuits corresponding to the second method. This result sheds new light on the complexity of #SAT and related problems on beta-acyclic formulas. As a byproduct, we give new handy tools to design algorithms on beta-acyclic hypergraphs.
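The first technique, exhaustive DPLL, extends classic DPLL search so that instead of stopping at the first satisfying assignment it explores both branches of every decision variable and sums their counts. A naive sketch of that idea (for illustration only, without the unit propagation and caching a real model counter would use; clauses are lists of signed integers):

```python
def count_models(clauses, variables):
    """Exhaustive DPLL: branch on a variable, simplify the formula,
    and sum the model counts of both branches."""
    if any(not c for c in clauses):        # empty clause: contradiction
        return 0
    if not variables:                      # every variable is assigned
        return 1 if not clauses else 0
    v, rest = variables[0], variables[1:]
    total = 0
    for value in (True, False):
        lit = v if value else -v
        # drop satisfied clauses; remove the falsified literal elsewhere
        simplified = [[l for l in c if l != -lit]
                      for c in clauses if lit not in c]
        total += count_models(simplified, rest)
    return total

# (x1 or x2) and (not x1 or x3), over variables x1..x3
clauses = [[1, 2], [-1, 3]]
print(count_models(clauses, [1, 2, 3]))  # → 4
```

Unassigned variables that no longer appear in any clause still contribute a factor of two here because both branches succeed, which is why the count comes out over all three variables.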
The Garden Hose Complexity for the Equality Function
The garden hose complexity is a communication complexity measure recently introduced by H. Buhrman, S. Fehr, C. Schaffner and F. Speelman [BFSS13] to analyze position-based cryptography protocols in the quantum setting. We focus on the garden hose complexity of the equality function, and improve on the bounds of O. Margalit and A. Matsliah [MM12] with the help of a new approach and of our handmade simulated-annealing-based solver. We have also found beautiful symmetries in the solutions that have led us to develop the notion of garden hose permutation groups. Then, exploiting this new concept, we get even further, although several interesting open problems remain.
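The abstract does not describe the handmade solver, but the generic simulated-annealing loop such a solver typically builds on is easy to sketch. Everything below is illustrative: the function names and the toy Hamming-distance instance are assumptions, not the authors' actual search problem:

```python
import math
import random

def anneal(initial, neighbor, cost, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Generic simulated annealing: accept worse states with probability
    exp(-delta/T) so the search can escape local minima, then cool down."""
    rng = random.Random(seed)
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        cc = cost(cand)
        if cc <= c or rng.random() < math.exp((c - cc) / t):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state, c
        t *= cooling                      # geometric cooling schedule
    return best, best_c

# toy instance: recover a hidden bitstring under Hamming-distance cost
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda s: sum(a != b for a, b in zip(s, target))

def flip(s, rng):
    i = rng.randrange(len(s))
    t = list(s)
    t[i] ^= 1
    return t

best, best_c = anneal([0] * 8, flip, cost)
```

For a real instance, `neighbor` and `cost` would instead mutate and score candidate garden hose strategies.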
Simple and Nearly Optimal Polynomial Root-finding by Means of Root Radii Approximation
We propose a new simple but nearly optimal algorithm for the approximation of
all sufficiently well isolated complex roots and root clusters of a univariate
polynomial. Quite typically the known root-finders at first compute some crude
but reasonably good approximations to well-conditioned roots (that is, those
isolated from the other roots) and then refine the approximations very fast, by
using Boolean time which is nearly optimal, up to a polylogarithmic factor. By
combining and extending some old root-finding techniques, the geometry of the
complex plane, and randomized parametrization, we accelerate the initial stage
of obtaining crude to all well-conditioned simple and multiple roots as well as
isolated root clusters. Our algorithm performs this stage at a Boolean cost
dominated by the nearly optimal cost of subsequent refinement of these
approximations, which we can perform concurrently, with minimum processor
communication and synchronization. Our techniques are quite simple and
elementary; their power and application range may increase in their combination
with the known efficient root-finding methods.Comment: 12 pages, 1 figur
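The "crude approximations, then concurrent refinement" pattern can be illustrated with the classical Weierstrass (Durand-Kerner) iteration, which refines estimates of all roots simultaneously; each update depends only on the current estimates, so the per-root updates parallelize. This is a textbook method shown for illustration, not the paper's algorithm:

```python
def durand_kerner(coeffs, iters=100):
    """Refine approximations to ALL roots of a monic polynomial at once:
    z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j)."""
    n = len(coeffs) - 1                      # degree; coeffs[0] must be 1

    def p(x):                                # Horner evaluation
        v = 0j
        for c in coeffs:
            v = v * x + c
        return v

    # crude initial guesses: powers of a point unlikely to hit a root
    z = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]
    for _ in range(iters):
        new = []
        for i, zi in enumerate(z):
            denom = 1 + 0j
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            new.append(zi - p(zi) / denom)
        z = new                              # Jacobi-style simultaneous update
    return z

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = sorted(durand_kerner([1, -6, 11, -6]), key=lambda r: r.real)
```

The crude circle of starting points plays the role of the initial stage; the loop is the fast concurrent refinement.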
On the Computational Power of DNA Annealing and Ligation
In [20] it was shown that the DNA primitives of Separate, Merge, and Amplify were not sufficiently powerful to invert functions defined by circuits in linear time. Dan Boneh et al. [4] showed that the addition of a ligation primitive, Append, provides the missing power. The question becomes: "How powerful is ligation? Are Separate, Merge, and Amplify necessary at all?" This paper informally explores the power of annealing and ligation for DNA computation. We conclude, in fact, that annealing and ligation alone are theoretically capable of universal computation.
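A rough intuition for annealing-plus-ligation computation: a splint strand anneals to the tail of one strand and the head of another, holding them adjacent so ligase can join them; which products can form encodes the computation. The toy in-silico model below is an assumption for illustration (fixed 4-base junctions, strands as strings), not the paper's formal model:

```python
from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(s: str) -> str:
    # Watson-Crick complement, read in the opposite direction
    return s.translate(COMPLEMENT)[::-1]

def ligate_products(strands, splints, rounds=3):
    """Toy annealing+ligation closure: join a and b whenever some splint
    is the reverse complement of the 8-base junction (tail of a + head
    of b), i.e. the splint can anneal across the junction."""
    pool = set(strands)
    for _ in range(rounds):
        new = set()
        for a, b in product(pool, pool):
            junction = a[-4:] + b[:4]
            if reverse_complement(junction) in splints:
                new.add(a + b)
        if new <= pool:                # no new products: closure reached
            break
        pool |= new
    return pool

# two "symbol" strands and one splint that bridges the a->b junction
pool = ligate_products({"AAAACCCC", "GGGGTTTT"}, {"CCCCGGGG"})
# the ligated product AAAACCCCGGGGTTTT appears in the pool
```

Choosing which splints exist controls which concatenations are possible, which is the sense in which annealing and ligation alone can drive a computation.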