Integer priority queues with decrease key in constant time and the single source shortest paths problem
We consider Fibonacci heap style integer priority queues supporting find-min, insert, and decrease key operations in constant time. We present a deterministic linear space solution that with n integer keys supports delete in O(log log n) time. If the integers are in the range [0, N), we can also support delete in O(log log N) time. Even for the special case of monotone priority queues, where the minimum has to be non-decreasing, the best previous bounds on delete were O((log n)^{1/(3−ε)}) and O((log N)^{1/(4−ε)}). These previous bounds used both randomization and amortization. Our new bounds are deterministic, worst-case, with no restriction to monotonicity, and exponentially faster. As a classical application, for a directed graph with n nodes and m edges with non-negative integer weights, we get single source shortest paths in O(m + n log log n) time, or O(m + n log log C) if C is the maximal edge weight. The latter solves an open problem of Ahuja, Mehlhorn, Orlin, and Tarjan from 1990.
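The shortest-path application plugs such a priority queue into Dijkstra's algorithm. A minimal sketch, assuming nothing from the paper: it uses Python's built-in binary heap (so O((n + m) log n) rather than the paper's O(m + n log log n)) and simulates decrease-key by lazy insertion.

```python
import heapq

def dijkstra(n, adj, source):
    """Single source shortest paths with non-negative integer weights.

    adj[u] is a list of (v, w) edges. Decrease-key is simulated by
    pushing a fresh entry and skipping stale ones when popped; swapping
    in a constant-time-decrease-key queue is what improves the bound.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                      # stale entry: a cheaper path was found
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # plays the role of decrease-key
    return dist

# Toy graph: 0 -> 1 (2), 0 -> 2 (5), 1 -> 2 (1)
adj = [[(1, 2), (2, 5)], [(2, 1)], []]
print(dijkstra(3, adj, 0))  # [0, 2, 3]
```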
Optimal lower bounds for universal relation, and for samplers and finding duplicates in streams
In the communication problem UR (universal relation) [KRW95], Alice and Bob respectively receive x, y ∈ {0,1}^n with the promise that x ≠ y. The last player to receive a message must output an index i such that x_i ≠ y_i. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly Θ(min{n, log(1/δ) log²(n/log(1/δ))}) for failure probability δ. Our lower bound holds even if promised supp(y) ⊆ supp(x). As a corollary, we obtain optimal lower bounds for ℓ_p-sampling in strict turnstile streams for 0 ≤ p < 2, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if promised x ∈ {0,1}^n at all points in the stream.
We give two different proofs of our main result. The first proof demonstrates that any algorithm A solving sampling problems in turnstile streams in low memory can be used to encode subsets of [n] of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to A throughout its execution, but done carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with A, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98], which needs to interact with A adaptively. To handle the adaptivity we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from A in the reduction.
Comment: merge of arXiv:1703.08139 and of work of Kapralov, Woodruff, and Yahyazadeh.
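The problem itself is easy to state in code. The sketch below is only a toy illustration of the task and of the trivial deterministic protocol in which Alice sends all n bits (matching the first term of the min in the bound); it is not any protocol from the paper.

```python
def ur_trivial_protocol(x, y):
    """Universal relation UR: Alice holds x, Bob holds y, with the promise
    x != y (equal-length bit strings). Bob must output an index i with
    x[i] != y[i].

    Trivial one-way protocol: Alice's message is all n bits of x, so the
    cost is n. The interesting regime is randomized protocols using far
    fewer bits, which is what the paper's lower bound addresses.
    """
    message = x                      # n-bit message from Alice to Bob
    for i, (a, b) in enumerate(zip(message, y)):
        if a != b:
            return i                 # Bob outputs a differing index
    raise ValueError("promise x != y violated")

print(ur_trivial_protocol("10110", "10010"))  # 2
```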
On the similarities between generalized rank and Hamming weights and their applications to network coding
Rank weights and generalized rank weights have been proven to characterize
error and erasure correction, and information leakage in linear network coding,
in the same way as Hamming weights and generalized Hamming weights describe
classical error and erasure correction, and information leakage in wire-tap
channels of type II and code-based secret sharing. Although many similarities
between both cases have been established and proven in the literature, many
other known results in the Hamming case, such as bounds or characterizations of
weight-preserving maps, have not been translated to the rank case yet, or in
some cases have been proven after developing a different machinery. The aim of
this paper is to further relate both weights and generalized weights, show that
the results and proofs in both cases are usually essentially the same, and see
the significance of these similarities in network coding. Some of the new
results in the rank case also have new consequences in the Hamming case.
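For the Hamming side of the correspondence, the generalized Hamming weights mentioned above (Wei's hierarchy, with d_1 the minimum distance) can be computed by brute force for toy codes. The sketch below is our own illustration with hypothetical helper names; the rank-metric analogue would replace Hamming supports by the corresponding rank supports.

```python
from itertools import combinations, product

def gf2_rank(vectors):
    """Rank over GF(2) of a collection of integer bitmasks (one int = one vector)."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)        # clear v's leading bit when it matches b's
        if v:
            basis.append(v)
    return len(basis)

def generalized_hamming_weights(G, n):
    """d_r = least support size of an r-dimensional subcode, for r = 1..k.

    Brute force: the support of a subcode is the union (bitwise OR) of the
    supports of any basis, so minimize over independent r-tuples of codewords.
    """
    k = len(G)
    codewords = []
    for msg in product([0, 1], repeat=k):
        w = 0
        for bit, row in zip(msg, G):
            if bit:
                w ^= row
        if w:
            codewords.append(w)
    weights = []
    for r in range(1, k + 1):
        best = n
        for combo in combinations(codewords, r):
            if gf2_rank(combo) == r:
                support = 0
                for c in combo:
                    support |= c
                best = min(best, bin(support).count("1"))
        weights.append(best)
    return weights

# [4,2] code with generator rows 1100 and 0011 (as bitmasks):
# d_1 = 2 (minimum distance), d_2 = 4 (support of the whole code)
print(generalized_hamming_weights([0b1100, 0b0011], 4))  # [2, 4]
```

Note the strict monotonicity d_1 < d_2 visible here, one of the properties shared by the Hamming and rank hierarchies.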
B-LOG: A branch and bound methodology for the parallel execution of logic programs
We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of a "session" is used to speed up the search process over a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and a "B-LOG processor" based on a scoreboard-driven controller.
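The best-first search driven by a bound can be sketched generically. This is a minimal sequential sketch under our own assumptions (all names are illustrative); B-LOG's session mechanism and local/global bound databases are not modeled.

```python
import heapq

def best_first_search(start, expand, score, is_goal):
    """Best-first branch and bound skeleton.

    `expand` maps a node to its children, `score` is the (lower-is-better)
    bound used as the frontier priority, `is_goal` tests for success.
    A counter breaks priority ties so nodes themselves are never compared.
    """
    frontier = [(score(start), 0, start)]
    counter = 1
    seen = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        if is_goal(node):
            return node
        for child in expand(node):
            heapq.heappush(frontier, (score(child), counter, child))
            counter += 1
    return None

# Toy usage: reach 13 from 1 via n+1 / 2n moves, guided by distance to 13.
goal = best_first_search(
    1,
    lambda n: [c for c in (n + 1, 2 * n) if c <= 20],
    lambda n: abs(13 - n),
    lambda n: n == 13,
)
print(goal)  # 13
```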
Codes with Locality for Two Erasures
In this paper, we study codes with locality that can recover from two erasures via a sequence of two local, parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation of small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of codes with all-symbol locality introduced by Gopalan et al., in which recovery from a single erasure is considered. By studying the Generalized Hamming Weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions for a family of codes based on Turán graphs that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code which permits all-symbol local recovery from two erasures can have larger minimum distance, regardless of the approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
Comment: 14 pages, 3 figures. Updated for improved readability.
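The sequential-recovery idea can be seen in miniature with a toy product code. This is our own illustration, not the paper's Turán-graph construction: a 3×3 binary array with even parity in every row and column, where each symbol lies on one weight-3 row check and one weight-3 column check, and two erasures are repaired one check at a time.

```python
def encode(data):
    """Extend a 2x2 binary data block to a 3x3 array with even parity in
    every row and column (a product of two [3,2] single-parity-check codes)."""
    a = [[0] * 3 for _ in range(3)]
    for i in range(2):
        for j in range(2):
            a[i][j] = data[i][j]
    for i in range(2):
        a[i][2] = a[i][0] ^ a[i][1]          # row parities
    for j in range(3):
        a[2][j] = a[0][j] ^ a[1][j]          # column parities (incl. corner)
    return a

# The six local checks: three rows and three columns.
CHECKS = [[(i, j) for j in range(3)] for i in range(3)] + \
         [[(i, j) for i in range(3)] for j in range(3)]

def recover(word, erased):
    """Sequential local repair: repeatedly find a check containing exactly
    one erased position and solve that single parity equation for it."""
    erased = set(erased)
    while erased:
        for check in CHECKS:
            missing = [p for p in check if p in erased]
            if len(missing) == 1:
                i, j = missing[0]
                word[i][j] = 0
                for r, c in check:
                    if (r, c) != (i, j):
                        word[i][j] ^= word[r][c]
                erased.remove((i, j))
                break
        else:
            raise ValueError("stuck: no check with a single erasure")
    return word

cw = encode([[1, 0], [1, 1]])
damaged = [row[:] for row in cw]
damaged[0][0] = damaged[0][1] = None          # two erasures in one row
print(recover(damaged, [(0, 0), (0, 1)]) == cw)  # True
```

With both erasures in one row, neither row check is usable at first; a column check repairs one symbol, after which the row check repairs the other, which is exactly the sequential (rather than parallel) mode of recovery.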
Tight Bounds for Gomory-Hu-like Cut Counting
By a classical result of Gomory and Hu (1961), in every edge-weighted graph G = (V, E, w), the minimum st-cut values, when ranging over all pairs s, t ∈ V, take at most n − 1 distinct values, where n = |V|. That is, these (n choose 2) instances exhibit redundancy factor Ω(n). They further showed how to construct from G a tree that stores all minimum st-cut values. Motivated by this result, we obtain tight bounds for the redundancy factor of several generalizations of the minimum st-cut problem.
1. Group-Cut: Consider the minimum (A, B)-cut, ranging over all subsets A, B ⊆ V of given sizes |A| and |B|. The redundancy factor is Ω_{|A|,|B|}(n).
2. Multiway-Cut: Consider the minimum cut separating every two vertices of S ⊆ V, ranging over all subsets S of a given size |S| = k. The redundancy factor is Ω_k(n).
3. Multicut: Consider the minimum cut separating every demand-pair in D ⊆ V × V, ranging over collections D of |D| = k demand pairs. The redundancy factor is n^{Θ(k)}. This result is a bit surprising, as the redundancy factor is much larger than in the first two problems.
A natural application of these bounds is to construct small data structures that store all relevant cut values, like the Gomory-Hu tree. We initiate this direction by giving some upper and lower bounds.
Comment: This version contains additional references to previous work (which have some overlap with our results); see Bibliographic Update 1.
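The classical Gomory-Hu redundancy is easy to verify by brute force on a small example. The sketch below is purely illustrative (a plain Edmonds-Karp max-flow, not any construction from the paper): on a weighted 4-cycle, the six pairwise minimum-cut values take only n − 1 = 3 distinct values.

```python
from collections import defaultdict, deque
from itertools import combinations

def min_cut(edges, s, t):
    """Minimum s-t cut value via Edmonds-Karp max-flow. An undirected
    edge {u, v} of weight w becomes capacity w in both directions."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, w in edges:
        cap[(u, v)] += w
        cap[(v, u)] += w
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:     # BFS for a shortest augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                      # max-flow value = min-cut value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)     # bottleneck capacity
        for u, v in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push

# Weighted 4-cycle with edge weights 1, 2, 3, 4.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4)]
values = {min_cut(edges, s, t) for s, t in combinations(range(4), 2)}
print(sorted(values))  # [3, 4, 5]
```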
Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes
Just as the Hamming weight spectrum of a linear block code sheds light on the
performance of a maximum likelihood decoder, the pseudo-weight spectrum
provides insight into the performance of a linear programming decoder. Using
properties of polyhedral cones, we find the pseudo-weight spectrum of some
short codes. We also present two general lower bounds on the minimum
pseudo-weight. The first bound is based on the column weight of the
parity-check matrix. The second bound is computed by solving an optimization
problem. In some cases, this bound is more tractable to compute than previously
known bounds and thus can be applied to longer codes.
Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005.
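For the Hamming-weight analogy in the opening sentence, the weight spectrum of a short code is easy to enumerate by brute force. (The pseudo-weight spectrum itself involves the fundamental cone of the parity-check matrix and is beyond a toy sketch.) The generator matrix below is one standard choice for the [7,4] Hamming code.

```python
from itertools import product

def weight_spectrum(G):
    """Hamming weight spectrum of the binary linear code generated by G:
    spectrum[w] = number of codewords of Hamming weight w."""
    k, n = len(G), len(G[0])
    spectrum = [0] * (n + 1)
    for msg in product([0, 1], repeat=k):
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        spectrum[sum(cw)] += 1
    return spectrum

# [7,4] Hamming code: weight enumerator 1 + 7x^3 + 7x^4 + x^7
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(weight_spectrum(G))  # [1, 0, 0, 7, 7, 0, 0, 1]
```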