The Lock-free k-LSM Relaxed Priority Queue
Priority queues are data structures which store keys in an ordered fashion to
allow efficient access to the minimal (maximal) key. Priority queues are
essential for many applications, e.g., Dijkstra's single-source shortest path
algorithm, branch-and-bound algorithms, and prioritized schedulers.
Efficient multiprocessor computing requires implementations of basic data
structures that can be used concurrently and scale to large numbers of threads
and cores. Lock-free data structures promise superior scalability by avoiding
blocking synchronization primitives, but the \emph{delete-min} operation is an
inherent scalability bottleneck in concurrent priority queues. Recent work has
focused on alleviating this obstacle either by batching operations, or by
relaxing the requirements to the \emph{delete-min} operation.
We present a new, lock-free priority queue that relaxes the \emph{delete-min} operation so that it is allowed to delete \emph{any} of the k smallest keys, where k is a runtime configurable parameter. Additionally, the behavior is identical to a non-relaxed priority queue for items added and removed by the same thread. The priority queue is built from a logarithmic number of sorted arrays in a way similar to log-structured merge-trees. We experimentally compare our priority queue to recent state-of-the-art lock-free priority queues, both with relaxed and non-relaxed semantics, showing high performance and good scalability of our approach.
Comment: Short version as ACM PPoPP'15 poster
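The log-structured-merge construction described above can be illustrated with a minimal single-threaded sketch. This is not the paper's lock-free k-LSM; the class name, the level sizing, and the linear scan in `delete_min` are simplifications for illustration only.

```python
import heapq

class LSMPriorityQueue:
    """Single-threaded sketch of a priority queue built from a
    logarithmic number of sorted arrays, merged LSM-tree style.
    (Illustrative only; the paper's k-LSM is lock-free and relaxed.)"""

    def __init__(self):
        self.levels = []  # levels[i] is a sorted list of size 2**i, or empty

    def insert(self, key):
        # Carry-and-merge upward, like binary addition: merging two
        # runs of size 2**i yields one run of size 2**(i+1).
        carry, i = [key], 0
        while i < len(self.levels) and self.levels[i]:
            carry = list(heapq.merge(carry, self.levels[i]))
            self.levels[i] = []
            i += 1
        if i == len(self.levels):
            self.levels.append([])
        self.levels[i] = carry

    def delete_min(self):
        # The global minimum is the smallest head among all levels.
        best = -1
        for i, lvl in enumerate(self.levels):
            if lvl and (best < 0 or lvl[0] < self.levels[best][0]):
                best = i
        if best < 0:
            raise IndexError("delete_min from empty queue")
        return self.levels[best].pop(0)  # O(n) pop; fine for a sketch

q = LSMPriorityQueue()
for x in [5, 1, 4, 2, 3]:
    q.insert(x)
print([q.delete_min() for _ in range(5)])  # → [1, 2, 3, 4, 5]
```

Each key participates in O(log n) merges over its lifetime, giving amortized O(log n) inserts, and `delete_min` only inspects O(log n) array heads.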
Selectable Heaps and Their Application to Lazy Search Trees
We show that the O(log n) time extract-minimum operation of efficient priority queues can be generalized to the extraction of the k smallest elements in O(k log(n/k)) time. We first show that the heap-ordered tree selection of Kaplan et al. can be applied to the heap-ordered trees of the classic Fibonacci heap to support the extraction in O(k log(n/k)) amortized time. We then show selection is possible in a priority queue with optimal worst-case guarantees by applying heap-ordered tree selection on Brodal queues, supporting the operation in O(k log(n/k)) worst-case time.
Via a reduction from the multiple selection problem, Ω(k log(n/k)) time is necessary.
We then apply the result to the lazy search trees of Sandlund & Wild, creating a new interval data structure based on selectable heaps. This gives optimal O(B+n) lazy search tree performance, lowering insertion complexity into a gap Δi to O(log(n/|Δi|)) time. An O(1)-time merge operation is also made possible under certain conditions. If Brodal queues are used, all runtimes of the lazy search tree can be made worst-case. The presented data structure uses soft heaps of Chazelle, biased search trees, and efficient priority queues in a non-trivial way, approaching the theoretically best data structure for ordered data.
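The heap-selection idea behind these bounds can be sketched in a few lines: to extract the k smallest elements of an array-based binary min-heap, maintain a small auxiliary heap over a frontier of candidate nodes, popping the minimum and pushing its children. The simple version below runs in O(k log k) rather than the paper's optimal O(k log(n/k)); `k_smallest_from_heap` is an illustrative name, not an API from the paper.

```python
import heapq

def k_smallest_from_heap(heap, k):
    """Return the k smallest elements of an array-based binary min-heap
    without disturbing it. Invariant: the smallest not-yet-reported
    element is always in the candidate frontier `cand`."""
    if k <= 0 or not heap:
        return []
    out = []
    cand = [(heap[0], 0)]  # frontier of heap nodes, ordered by key
    while cand and len(out) < k:
        key, i = heapq.heappop(cand)
        out.append(key)
        for c in (2 * i + 1, 2 * i + 2):  # children in the implicit tree
            if c < len(heap):
                heapq.heappush(cand, (heap[c], c))
    return out

h = list(range(20, 0, -1))
heapq.heapify(h)  # h is now a valid binary min-heap
print(k_smallest_from_heap(h, 5))  # → [1, 2, 3, 4, 5]
```

Each of the k rounds pops one node and pushes at most two, so the frontier stays at O(k) entries and each round costs O(log k).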
Exploiting non-constant safe memory in resilient algorithms and data structures
We extend the Faulty RAM model by Finocchi and Italiano (2008) by adding a safe memory of arbitrary size S, and we then derive tradeoffs between the performance of resilient algorithmic techniques and the size of the safe memory. Let δ and α denote, respectively, the maximum number of faults that can occur during the execution of an algorithm and the actual number of faults that occur, with α ≤ δ. We propose a resilient
algorithm for sorting n entries which requires O(n log n + α(δ/S + log S)) time and uses S safe memory words. Our algorithm outperforms previous resilient sorting algorithms, which do not exploit the available safe memory and require O(n log n + αδ) time. Finally, we exploit our sorting algorithm to derive a resilient priority queue. Our implementation uses O(S) safe memory words and O(n) faulty memory words for storing the keys, and requires O(log n + δ/S) amortized time for each insert and deletemin operation. Our resilient priority queue improves the amortized time required by the state of the art.
Comment: To appear in Theoretical Computer Science, 201
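A standard building block in this resilient-memory line of work is the "resilient variable": a value kept in 2δ+1 copies in faulty memory, so that at most δ corruptions can never outvote the majority. The sketch below is mine, not the paper's; the function names and the hand-simulated faults are illustrative.

```python
from collections import Counter

DELTA = 3  # assumed upper bound δ on the number of memory faults

def write_resilient(value):
    # Keep 2δ+1 copies in (simulated) faulty memory.
    return [value] * (2 * DELTA + 1)

def read_resilient(copies):
    # At most δ copies can be corrupted, so the true value always
    # holds a strict majority of the 2δ+1 copies.
    return Counter(copies).most_common(1)[0][0]

copies = write_resilient(42)
copies[0], copies[4], copies[6] = 999, -1, 999  # simulate δ = 3 faults
print(read_resilient(copies))  # → 42
```

Replication like this multiplies space and time by Θ(δ); the point of a safe memory of size S is precisely to trade that δ-dependent overhead against S, as in the bounds above.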
An Efficient Implementation of the Robust Tabu Search Heuristic for Sparse Quadratic Assignment Problems
We propose and develop an efficient implementation of the robust tabu search
heuristic for sparse quadratic assignment problems. The traditional
implementation of the heuristic applicable to all quadratic assignment problems
is of O(N^2) complexity per iteration for problems of size N. Using multiple
priority queues to determine the next best move instead of scanning all
possible moves, and using adjacency lists to minimize the operations needed to
determine the cost of moves, we reduce the asymptotic complexity per iteration
to O(N log N). For problems of practical size, the complexity is O(N).
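The priority-queue idea can be sketched generically: keep candidate moves in a min-heap keyed by their cached cost, and on each iteration pop entries and re-check staleness instead of rescanning all O(N^2) moves. This is a hedged illustration of lazy move selection, not the paper's implementation; the move encoding and cost table below are made up.

```python
import heapq

def select_best_move(heap, cost):
    """Pop until an entry whose cached cost is still current appears;
    stale entries are re-pushed with their refreshed cost (lazy updates).
    Within one call each move is refreshed at most once, so this terminates."""
    while heap:
        cached, move = heapq.heappop(heap)
        c = cost(move)
        if c == cached:
            return move
        heapq.heappush(heap, (c, move))  # cost changed: refresh and retry
    return None

# Toy usage: swap moves on a small permutation with a made-up cost table.
costs = {(0, 1): 5, (0, 2): 2, (1, 2): 7, (1, 3): 1, (2, 3): 4}
heap = [(c, m) for m, c in costs.items()]
heapq.heapify(heap)
costs[(1, 3)] = 9  # this move's cost changed after it was enqueued
print(select_best_move(heap, costs.get))  # → (0, 2)
```

In a sparse instance only a few move costs change per iteration, so most heap entries stay valid and each selection touches O(log N) entries rather than the full neighborhood.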
Queues and risk models with simultaneous arrivals
We focus on a particular connection between queueing and risk models in a
multi-dimensional setting. We first consider the joint workload process in a
queueing model with parallel queues and simultaneous arrivals at the queues.
For the case that the service times are ordered (from largest in the first
queue to smallest in the last queue) we obtain the Laplace-Stieltjes transform
of the joint stationary workload distribution. Using a multivariate duality
argument between queueing and risk models, this also gives the Laplace
transform of the survival probability of all books in a multivariate risk model
with simultaneous claim arrivals and the same ordering between claim sizes.
Other features of the paper include a stochastic decomposition result for the
workload vector, and an outline how the two-dimensional risk model with a
general two-dimensional claim size distribution (hence without ordering of
claim sizes) is related to a known Riemann boundary value problem.
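The joint workload process described above is easy to simulate with a Lindley-type recursion, W_i ← max(W_i + B_i − A, 0), applied per queue with a common interarrival time A. This is a toy sketch: the arrival rate, service rates, and exponential distributions are illustrative, and independent exponentials satisfy the paper's service-time ordering only stochastically, not pathwise.

```python
import random

def mean_workloads(n, lam=1.0, mus=(2.0, 3.0), seed=0):
    """Simulate parallel queues fed by simultaneous Poisson arrivals:
    every arrival brings one job to each queue. Returns the empirical
    mean workload of each queue over n arrivals (Lindley recursion)."""
    rng = random.Random(seed)
    w = [0.0] * len(mus)      # current workload of each queue
    tot = [0.0] * len(mus)
    for _ in range(n):
        a = rng.expovariate(lam)           # common interarrival time
        for i, mu in enumerate(mus):
            b = rng.expovariate(mu)        # service time in queue i
            w[i] = max(w[i] + b - a, 0.0)  # Lindley recursion
            tot[i] += w[i]
    return [t / n for t in tot]

# Queue 1 has larger (rate-2) jobs than queue 2 (rate-3), so its
# stationary workload is larger; M/M/1 theory predicts roughly 0.5 vs 0.17.
print(mean_workloads(100_000))
```

Because both queues share the same arrival epochs, their workloads are dependent, which is exactly what makes the joint (rather than marginal) stationary distribution the interesting object in the paper.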