Smooth heaps and a dual view of self-adjusting data structures
We present a new connection between self-adjusting binary search trees (BSTs)
and heaps, two fundamental, extensively studied, and practically relevant
families of data structures. Roughly speaking, we map an arbitrary heap
algorithm, within a natural model, to a corresponding BST algorithm with the
same cost on a dual sequence of operations (i.e., the same sequence with the
roles of time and key-space switched). This is the first general transformation
between the two families of data structures.
There is a rich theory of dynamic optimality for BSTs (i.e. the theory of
competitiveness between BST algorithms). The lack of an analogous theory for
heaps has been noted in the literature. Through our connection, we transfer all
instance-specific lower bounds known for BSTs to a general model of heaps,
initiating a theory of dynamic optimality for heaps.
On the algorithmic side, we obtain a new, simple and efficient heap
algorithm, which we call the smooth heap. We show the smooth heap to be the
heap-counterpart of Greedy, the BST algorithm with the strongest proven and
conjectured properties from the literature, widely believed to be
instance-optimal. Assuming the optimality of Greedy, the smooth heap is also
optimal within our model of heap algorithms. As corollaries of results known
for Greedy, we obtain instance-specific upper bounds for the smooth heap, with
applications in adaptive sorting.
Intriguingly, the smooth heap, although derived from a non-practical BST
algorithm, is simple and easy to implement (e.g. it stores no auxiliary data
besides the keys and tree pointers). It can be seen as a variation on the
popular pairing heap data structure, extending it with a "power-of-two-choices"
type of heuristic.
Comment: Presented at STOC 2018; light revision, additional figure.
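The linking rule can be made concrete with a small sketch. The Python below is our own illustration, not the authors' specification: class and method names are invented, keys are assumed distinct, and the quadratic consolidation loop is written for clarity (the actual structure does this work in a single linear pass). The "power-of-two-choices" flavor appears where a locally maximal root chooses between its two smaller neighbors:

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []   # ordered child list, as in a pairing heap

class SmoothHeap:
    """Min-heap sketch: consolidation repeatedly links a locally maximal
    root below the larger of its two (smaller) neighbors. Distinct keys
    are assumed; no auxiliary data beyond keys and tree pointers is kept."""

    def __init__(self):
        self.roots = []      # buffer of not-yet-consolidated roots

    def insert(self, key):
        self.roots.append(Node(key))

    def _consolidate(self):
        rs = self.roots
        while len(rs) > 1:
            # find a locally maximal root (the global maximum always is one)
            for i, x in enumerate(rs):
                left = rs[i - 1].key if i > 0 else float("-inf")
                right = rs[i + 1].key if i + 1 < len(rs) else float("-inf")
                if x.key > left and x.key > right:
                    break
            # power of two choices: link x below the larger of its neighbors
            j = i - 1 if left > right else i + 1
            rs[j].children.append(x)
            rs.pop(i)

    def delete_min(self):
        self._consolidate()
        root = self.roots.pop()          # the single surviving root
        self.roots = list(root.children)
        return root.key
```

Every link places a larger key below a smaller one, so heap order is preserved and the surviving root is the minimum; the only state is the root buffer and the child lists, consistent with the "no auxiliary data" remark above.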
Space-Efficient Parallel Algorithms for Combinatorial Search Problems
We present space-efficient parallel strategies for two fundamental
combinatorial search problems, namely, backtrack search and branch-and-bound,
both involving the visit of an n-node tree of height h under the assumption
that a node can be accessed only through its father or its children. For both
problems we propose efficient algorithms that run on a p-processor
distributed-memory machine. For backtrack search, we give a deterministic
algorithm running in O(n/p + h log p) time, and a Las Vegas algorithm requiring
optimal O(n/p + h) time, with high probability. Building on the backtrack
search algorithm, we also derive a Las Vegas algorithm for branch-and-bound
which runs in O((n/p + h log p) log n) time, with high probability. A
remarkable feature of our algorithms is the use of only constant space per
processor, which constitutes a significant improvement upon previous algorithms
whose space requirements per processor depend on the (possibly huge) tree to be
explored.
Comment: Extended version of the paper in the Proc. of the 38th International
Symposium on Mathematical Foundations of Computer Science (MFCS 2013).
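The tree-structured search the abstract describes can be grounded with a tiny sequential example. The n-queens counter below is our own illustration, not the paper's parallel algorithm: each node of the search tree is a partial placement, reachable only from its father (one queen fewer) or its children (the safe extensions):

```python
def count_queens(n):
    """Backtrack search: level r of the tree fixes the queen in row r, and a
    node's children are the columns that attack no earlier queen."""
    def extend(cols):          # cols[r] = column of the queen in row r
        row = len(cols)
        if row == n:
            return 1           # leaf: a complete, valid placement
        return sum(extend(cols + [c])
                   for c in range(n)
                   if all(c != pc and abs(c - pc) != row - pr
                          for pr, pc in enumerate(cols)))
    return extend([])
```

Distributing such subtrees across p processors while keeping only constant space per processor — no stored frontier or subtree — is precisely the difficulty the paper's algorithms address.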
Show Me the Money: Dynamic Recommendations for Revenue Maximization
Recommender Systems (RS) play a vital role in applications such as e-commerce
and on-demand content streaming. Research on RS has mainly focused on the
customer perspective, i.e., accurate prediction of user preferences and
maximization of user utilities. As a result, most existing techniques are not
explicitly built for revenue maximization, the primary business goal of
enterprises. In this work, we explore and exploit a novel connection between RS
and the profitability of a business. As recommendations can be seen as an
information channel between a business and its customers, it is interesting and
important to investigate how to make strategic dynamic recommendations leading
to maximum possible revenue. To this end, we propose a novel model that takes
into account a variety of factors including prices, valuations, saturation
effects, and competition amongst products. Under this model, we study the
problem of finding revenue-maximizing recommendation strategies over a finite
time horizon. We show that this problem is NP-hard, but approximation
guarantees can be obtained for a slightly relaxed version, by establishing an
elegant connection to matroid theory. Given the prohibitively high complexity
of the approximation algorithm, we also design intelligent heuristics for the
original problem. Finally, we conduct extensive experiments on two real and
synthetic datasets and demonstrate the efficiency, scalability, and
effectiveness of our algorithms, and that they significantly outperform several
intuitive baselines.
Comment: Conference version published in PVLDB 7(14). To be presented at the
VLDB 2015 conference in Hawaii. This version gives a detailed submodularity
proof.
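The flavor of such heuristics can be sketched with a deliberately simplified toy model of our own (fixed prices, per-product purchase probabilities, and a geometric saturation decay on repeat recommendations); this is not the paper's model, which additionally covers valuations and competition among products:

```python
def greedy_plan(prices, probs, horizon, saturation=0.5):
    """Myopic greedy heuristic: at each step, recommend the product with the
    highest expected marginal revenue, price * purchase probability, decayed
    by saturation ** (times already recommended). Returns the recommendation
    plan and its expected revenue under this toy model."""
    counts = [0] * len(prices)
    plan, revenue = [], 0.0
    for _ in range(horizon):
        scores = [p * q * saturation ** c
                  for p, q, c in zip(prices, probs, counts)]
        best = max(range(len(prices)), key=scores.__getitem__)
        plan.append(best)
        revenue += scores[best]
        counts[best] += 1
    return plan, revenue
```

Greedy maximizes each step in isolation, so it is suboptimal in general; the NP-hardness result above is one reason heuristics of this kind, rather than exact algorithms, are used for the original problem.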
Incremental Cycle Detection, Topological Ordering, and Strong Component Maintenance
We present two on-line algorithms for maintaining a topological order of a
directed n-vertex acyclic graph as arcs are added, and detecting a cycle when
one is created. Our first algorithm handles m arc additions in O(m^(3/2))
time. For sparse graphs (m/n = O(1)), this bound improves the best previous
bound by a logarithmic factor, and is tight to within a constant factor among
algorithms satisfying a natural {\em locality} property. Our second algorithm
handles an arbitrary sequence of arc additions in O(n^(5/2)) time. For
sufficiently dense graphs, this bound improves the best previous bound by a
polynomial factor. Our bound may be far from tight: we show that the algorithm
can take Omega(n^2 * 2^sqrt(2 lg n)) time by relating its performance to a
generalization of the k-levels problem of combinatorial geometry. A
completely different algorithm running in Theta(n^2 log n) time was given
recently by Bender, Fineman, and Gilbert. We extend both of our algorithms to
the maintenance of strong components, without affecting the asymptotic time
bounds.
Comment: 31 pages.
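For contrast with the algorithms above, here is a naive incremental scheme (our own baseline sketch, not either algorithm from the paper): keep a topological order, and whenever a new arc violates it, recompute the order from scratch with Kahn's algorithm, reporting a cycle when the recomputation fails. Avoiding this worst-case O(n + m) work per insertion is exactly what the paper's algorithms achieve:

```python
from collections import defaultdict, deque

class IncrementalDAG:
    """Naive baseline: full Kahn recomputation on each order violation."""

    def __init__(self, n):
        self.n = n
        self.adj = defaultdict(list)
        self.pos = list(range(n))    # pos[v] = position in current order

    def add_arc(self, u, v):
        """Add arc u->v; return True if the graph stays acyclic, False if
        the arc closes a cycle (in which case it is discarded)."""
        self.adj[u].append(v)
        if self.pos[u] < self.pos[v]:
            return True              # current order still valid: no work
        order = self._kahn()
        if order is None:
            self.adj[u].pop()        # roll back the offending arc
            return False
        for i, w in enumerate(order):
            self.pos[w] = i
        return True

    def _kahn(self):
        indeg = [0] * self.n
        for u in self.adj:
            for v in self.adj[u]:
                indeg[v] += 1
        q = deque(v for v in range(self.n) if indeg[v] == 0)
        order = []
        while q:
            u = q.popleft()
            order.append(u)
            for v in self.adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    q.append(v)
        return order if len(order) == self.n else None
```

The "locality" property mentioned above restricts how an algorithm may reorder vertices in response to an insertion; this baseline, which reorders globally, sits outside that restricted class.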
Wireless Network Simplification: the Gaussian N-Relay Diamond Network
We consider the Gaussian N-relay diamond network, where a source wants to
communicate to a destination node through a layer of N-relay nodes. We
investigate the following question: what fraction of the capacity can we
maintain by using only k out of the N available relays? We show that
independent of the channel configurations and the operating SNR, we can always
find a subset of k relays which alone provide a rate (kC/(k+1))-G, where C is
the information theoretic cutset upper bound on the capacity of the whole
network and G is a constant that depends only on N and k (logarithmic in N and
linear in k). In particular, for k = 1, this means that half of the capacity of
any N-relay diamond network can be approximately achieved by routing
information over a single relay. We also show that this fraction is tight:
there are configurations of the N-relay diamond network where every subset of k
relays alone can at most provide approximately a fraction k/(k+1) of the total
capacity. These high-capacity k-relay subnetworks can be also discovered
efficiently. We propose an algorithm that computes a constant gap approximation
to the capacity of the Gaussian N-relay diamond network in O(N log N) running
time and discovers a high-capacity k-relay subnetwork in O(kN) running time.
This result also provides a new approximation to the capacity of the Gaussian
N-relay diamond network which is hybrid in nature: it has both multiplicative
and additive gaps. In the intermediate SNR regime, this hybrid approximation is
tighter than existing purely additive or purely multiplicative approximations
to the capacity of this network.
Comment: Submitted to Transactions on Information Theory in October 2012. The
new version includes discussions on the algorithmic complexity of discovering
a high-capacity subnetwork and on the performance of amplify-and-forward.
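The guarantee stated above is easy to evaluate numerically. The helper below is purely illustrative: it tabulates the multiplicative part k/(k+1) of the bound, treating the additive gap G as a given input (the abstract characterizes G only as logarithmic in N and linear in k):

```python
def guaranteed_rate(C, k, G):
    """Rate achievable by the best k-relay subnetwork, per the abstract:
    at least k*C/(k+1) - G, where C is the cutset upper bound on the
    capacity of the whole N-relay diamond network."""
    return k * C / (k + 1) - G

# fraction of capacity retained by k relays, ignoring the additive gap G:
fractions = {k: k / (k + 1) for k in range(1, 5)}
```

For k = 1 the fraction is 1/2, matching the statement that a single relay approximately achieves half the capacity of any N-relay diamond network.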