8,248 research outputs found
Engineering Aggregation Operators for Relational In-Memory Database Systems
In this thesis we study the design and implementation of aggregation operators in the context of relational in-memory database systems. In particular, we identify and address the following challenges: cache-efficiency, CPU-friendliness, parallelism within and across processors, robust handling of skewed data, adaptive processing, processing with constrained memory, and integration with modern database architectures. Our resulting algorithm outperforms the state of the art by up to 3.7x.
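The abstract does not spell out the operator's internals, so as a point of reference here is a minimal sketch of the textbook hash-based group-by aggregation that such operators refine; the function name, columns, and data are illustrative, not from the thesis.

```python
from collections import defaultdict

def hash_aggregate(rows, key, agg):
    # Minimal hash-based GROUP BY ... SUM sketch. Real in-memory
    # operators refine this with cache-conscious hash tables, per-core
    # pre-aggregation, and skew-aware partitioned merges.
    groups = defaultdict(float)
    for row in rows:
        groups[row[key]] += row[agg]  # one hash probe/update per input row
    return dict(groups)

# Example: SELECT dept, SUM(salary) FROM emp GROUP BY dept
emp = [{"dept": "eng", "salary": 100.0},
       {"dept": "ops", "salary": 80.0},
       {"dept": "eng", "salary": 90.0}]
print(hash_aggregate(emp, "dept", "salary"))  # {'eng': 190.0, 'ops': 80.0}
```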
Econometric methods for research in education
This paper reviews some of the econometric methods that have been used in the economics of education. The focus is on understanding how the assumptions made to justify and implement such methods relate to the underlying economic model and the interpretation of the results. We start by considering the estimation of the returns to education, both within the context of a dynamic discrete choice model inspired by Willis and Rosen (1979) and in the context of the Mincer model. We discuss the relationship between the econometric assumptions and economic behaviour. We then discuss methods that have been used in the context of assessing the impact of education quality, the teacher contribution to pupils' achievement, and the effect of school quality on housing prices. In the process we also provide a summary of some of the main results in this literature.
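For reference, the Mincer model referred to above is the standard log-wage regression; a common rendering (notation ours) is

```latex
% Mincer wage equation: log earnings as a function of years of
% schooling s_i and labour-market experience x_i (notation ours).
\ln w_i = \alpha + \beta\, s_i + \gamma\, x_i + \delta\, x_i^2 + \varepsilon_i
```

where \beta is read as the return to an additional year of schooling.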
Parallel Working-Set Search Structures
In this paper we present two versions of a parallel working-set map on p processors that supports searches, insertions and deletions. In both versions, the total work of all operations when the map has size at least p is bounded by the working-set bound, i.e., the cost of an item depends on how recently it was accessed (for some linearization): accessing an item in the map with recency r takes O(1 + log r) work. In the simpler version each map operation has O((log p)^2 + log n) span (where n is the maximum size of the map). In the pipelined version each map operation on an item with recency r has O((log p)^2 + log r) span. (Operations in parallel may have overlapping span; span is additive only for operations in sequence.)
Both data structures are designed to be used by a dynamic multithreading parallel program that at each step executes a unit-time instruction or makes a data structure call. To achieve the stated bounds, the pipelined data structure requires a weak-priority scheduler, which supports a limited form of 2-level prioritization. At the end we explain how the results translate to practical implementations using work-stealing schedulers.
To the best of our knowledge, this is the first parallel implementation of a self-adjusting search structure where the cost of an operation adapts to the access sequence. A corollary of the working-set bound is that it achieves work static optimality: the total work is bounded by the access costs in an optimal static search tree.
Comment: Authors' version of a paper accepted to SPAA 201
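To make the working-set bound concrete: if the i-th of m operations touches an item with recency r_i, the claimed total-work guarantee (our rendering of the standard definition) is

```latex
% Working-set bound: r_i is the recency of the item touched by
% operation i; total work is summed over all m operations.
W_{\mathrm{total}} = \sum_{i=1}^{m} O\!\left(1 + \log r_i\right)
```

Static optimality follows because this sum is known to be at most the cost an optimal static search tree pays on the same access sequence.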
Algorithmic ramifications of prefetching in memory hierarchy
External memory models, most notably the I-O model [3], capture the effects of the memory hierarchy and aid in algorithm design. More than a decade of architectural advancements have led to new features not captured in the I-O model, most notably the prefetching capability.
We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O model and show how to design algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed, so intelligent use of prefetching can considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized random access machine under reasonable assumptions. Our work also explains the significantly superior performance of I-O-efficient algorithms in systems that support prefetching compared to ones that do not.
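The abstract does not give the Prefetch model's exact cost function, but the latency-hiding arithmetic behind the bandwidth claim is standard: with memory latency \ell (measured in block-transfer times) and k block requests kept in flight, fetch time pipelines roughly as

```latex
% Illustrative latency-hiding arithmetic (not the paper's formal model):
% k blind sequential fetches pay the full latency each time, while k
% overlapped prefetches pay it once and then stream at bandwidth.
T_{\text{no prefetch}} \approx k\,\ell
\qquad\text{versus}\qquad
T_{\text{prefetch}} \approx \ell + k
```

so for large k the cost per block approaches one block-transfer time, i.e., peak bandwidth.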
Energy-Efficient Algorithms
We initiate the systematic study of the energy complexity of algorithms (in
addition to time and space complexity) based on Landauer's Principle in
physics, which gives a lower bound on the amount of energy a system must
dissipate if it destroys information. We propose energy-aware variations of
three standard models of computation: circuit RAM, word RAM, and
transdichotomous RAM. On top of these models, we build familiar high-level
primitives such as control logic, memory allocation, and garbage collection
with zero energy complexity and only constant-factor overheads in space and
time complexity, enabling simple expression of energy-efficient algorithms. We
analyze several classic algorithms in our models and develop low-energy
variations: comparison sort, insertion sort, counting sort, breadth-first
search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL
trees, binary heaps, and dynamic arrays. We explore the time/space/energy
trade-off and develop several general techniques for analyzing algorithms and
reducing their energy complexity. These results lay a theoretical foundation
for a new field of semi-reversible computing and provide a new framework for
the investigation of algorithms.
Comment: 40 pages, 8 pdf figures, full version of work published in ITCS 201
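Landauer's Principle, which anchors these lower bounds, can be stated in one line:

```latex
% Landauer's principle: irreversibly erasing one bit at temperature T
% dissipates at least k_B T ln 2, where k_B is Boltzmann's constant.
E_{\text{per erased bit}} \;\ge\; k_B\, T \ln 2
```

so an algorithm's energy complexity tracks how much information it destroys rather than how many steps it takes.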
Parallel resampling in the particle filter
Modern parallel computing devices, such as the graphics processing unit
(GPU), have gained significant traction in scientific and statistical
computing. They are particularly well-suited to data-parallel algorithms such
as the particle filter, or more generally Sequential Monte Carlo (SMC), which
are increasingly used in statistical inference. SMC methods carry a set of
weighted particles through repeated propagation, weighting and resampling
steps. The propagation and weighting steps are straightforward to parallelise,
as they require only independent operations on each particle. The resampling
step is more difficult, as standard schemes require a collective operation,
such as a sum, across particle weights. Focusing on this resampling step, we
analyse two alternative schemes that do not involve a collective operation
(Metropolis and rejection resamplers), and compare them to standard schemes
(multinomial, stratified and systematic resamplers). We find that, in certain
circumstances, the alternative resamplers can perform significantly faster on a
GPU, and to a lesser extent on a CPU, than the standard approaches. Moreover,
in single precision, the standard approaches are numerically biased for upwards
of hundreds of thousands of particles, while the alternatives are not. This is
particularly important given greater single- than double-precision throughput
on modern devices, and the consequent temptation to use single precision with a
greater number of particles. Finally, we provide auxiliary functions useful for
implementation, such as for the permutation of ancestry vectors to enable
in-place propagation.
Comment: 21 pages, 6 figures
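To illustrate what a collective-free scheme looks like, here is a minimal sketch of a Metropolis resampler in the spirit of the one analysed above; the chain length B and the NumPy usage are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def metropolis_resample(weights, B, rng):
    # Each ancestor index is chosen by a short Metropolis chain over
    # particle indices: propose j uniformly, accept with probability
    # min(1, w_j / w_k). Only pairwise weight ratios are needed -- no
    # sum or max across all weights -- so each chain is independent and
    # the outer loop parallelises with no collective operation.
    n = len(weights)
    ancestors = np.empty(n, dtype=np.int64)
    for i in range(n):
        k = i
        for _ in range(B):
            j = int(rng.integers(n))             # uniform proposal
            if rng.random() * weights[k] < weights[j]:
                k = j                            # accept the move
        ancestors[i] = k
    return ancestors

# Toy usage: 4 particles, unnormalised weights, chain length 32.
rng = np.random.default_rng(1)
print(metropolis_resample(np.array([0.1, 0.4, 0.2, 0.3]), B=32, rng=rng))
```

Because B is finite, the ancestry is only approximately distributed according to the weights; this speed/bias trade-off is the kind of behaviour the analysis above quantifies.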
- …