Adaptive Execution of Compiled Queries
Compiling queries to machine code is arguably the most efficient way to execute queries. One often overlooked problem with compilation, however, is the time it takes to generate machine code. Even with fast compilation frameworks like LLVM, generating machine code for complex queries routinely takes hundreds of milliseconds. Such compilation times can be a major disadvantage for workloads that execute many complex but quick queries. To solve this problem, we propose an adaptive execution framework, which dynamically and transparently switches from interpretation to compilation. We also propose a fast bytecode interpreter for LLVM, which can execute queries without costly translation to machine code and thereby dramatically reduces query latency. Adaptive execution is dynamic, fine-grained, and can execute different code paths of the same query using different execution modes. Our extensive evaluation shows that this approach achieves optimal performance in a wide variety of settings---low latency for small data sets and maximum throughput for large data sizes.
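To illustrate the idea of switching execution modes while a query runs, here is a minimal, self-contained C++ sketch. It is not the paper's implementation: the real system interprets LLVM bytecode and compiles with LLVM's JIT, whereas the names below (Chunk, interpret_chunk, compile_query) are hypothetical stand-ins. Compilation is started asynchronously, interpretation begins immediately, and each chunk of data checks whether compiled code has become available yet.

```cpp
// Minimal sketch of switching between an interpreter and compiled code.
// All names (Chunk, interpret_chunk, compile_query) are hypothetical; the real
// system interprets LLVM bytecode and emits machine code via LLVM's JIT.
#include <chrono>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

using Chunk = std::vector<int>;
using ExecFn = long (*)(const Chunk&);

// Slow but immediately available path: interpret the query chunk by chunk.
long interpret_chunk(const Chunk& c) {
    long sum = 0;
    for (int v : c) if (v % 2 == 0) sum += v;   // "SELECT sum(v) WHERE v % 2 = 0"
    return sum;
}

// Stand-in for expensive code generation; returns the fast compiled path.
ExecFn compile_query() {
    return [](const Chunk& c) {
        return std::accumulate(c.begin(), c.end(), 0L,
                               [](long s, int v) { return v % 2 == 0 ? s + v : s; });
    };
}

int main() {
    std::vector<Chunk> data(1000, Chunk(1024, 2));

    // Kick off compilation asynchronously and start interpreting right away.
    std::future<ExecFn> compiled = std::async(std::launch::async, compile_query);

    long result = 0;
    ExecFn fast = nullptr;
    for (const Chunk& chunk : data) {
        // Fine-grained switch: once machine code is ready, use it for the
        // remaining chunks; until then, keep interpreting.
        if (!fast && compiled.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
            fast = compiled.get();
        result += fast ? fast(chunk) : interpret_chunk(chunk);
    }
    std::cout << result << "\n";
}
```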
AMaχoS—Abstract Machine for Xcerpt
Web query languages promise convenient and efficient access
to Web data such as XML, RDF, or Topic Maps. Xcerpt is one such Web
query language with strong emphasis on novel high-level constructs for
effective and convenient query authoring, particularly tailored to versatile
access to data in different Web formats such as XML or RDF.
However, so far it lacks an efficient implementation to supplement the
convenient language features. AMaχoS is an abstract machine implementation
for Xcerpt that aims at efficiency and ease of deployment. It
strictly separates compilation and execution of queries: Queries are compiled
once to abstract machine code that consists of (1) a code segment
with instructions for evaluating each rule and (2) a hint segment that
provides the abstract machine with optimization hints derived by the
query compilation. This article summarizes the motivation and principles
behind AMaχoS and discusses how its current architecture realizes
these principles.
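As an illustration of the two-segment layout described above, the following C++ sketch models a compiled Xcerpt program as a code segment with one instruction list per rule plus a hint segment. The opcodes and hint fields are invented for illustration and are not AMaχoS's actual instruction set.

```cpp
// Illustrative layout of a compiled program in two segments; the opcodes and
// hint fields here are invented, not AMaχoS's actual instruction set.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

enum class Op : uint8_t { MatchElem, BindVar, Descend, ApplyRule, Return };

struct Instruction {
    Op op;
    std::string arg;   // e.g. an element label or variable name
};

// (1) Code segment: one instruction sequence per query rule.
struct RuleCode {
    std::vector<Instruction> instructions;
};

// (2) Hint segment: optimization hints derived during query compilation,
// e.g. rough selectivity estimates the abstract machine may exploit.
struct Hint {
    std::size_t rule_index;
    double estimated_selectivity;
};

struct CompiledProgram {
    std::vector<RuleCode> code_segment;
    std::vector<Hint> hint_segment;
};

int main() {
    CompiledProgram prog;
    prog.code_segment.push_back({{{Op::MatchElem, "book"}, {Op::BindVar, "X"}, {Op::Return, ""}}});
    prog.hint_segment.push_back({0, 0.25});
    std::cout << prog.code_segment.size() << " rule(s), "
              << prog.hint_segment.size() << " hint(s)\n";
}
```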
A Context-Oriented Extension of F#
Context-Oriented programming languages provide us with primitive constructs
to adapt program behaviour depending on the evolution of their operational
environment, namely the context. In previous work we proposed ML_CoDa, a
context-oriented language with two components: a declarative constituent for
programming the context and a functional one for computing. This paper
describes the implementation of ML_CoDa as an extension of F#.
Comment: In Proceedings FOCLASA 2015, arXiv:1512.0694
Enabling Adaptive Grid Scheduling and Resource Management
Wider adoption of the Grid concept has led to an increasing number of federated
computational, storage, and visualisation resources becoming available to scientists and
researchers. The distributed and heterogeneous nature of these resources renders most
legacy cluster monitoring and management approaches inappropriate and poses new
challenges in workflow scheduling on such systems. Effective resource utilisation monitoring
and highly granular yet adaptive measurements are prerequisites for a more efficient Grid
scheduler. We present a suite of measurement applications able to monitor per-process
resource utilisation, and a customisable tool for emulating observed utilisation models. We
also outline our future work on a predictive and probabilistic Grid scheduler. The research is
undertaken as part of the UK e-Science EPSRC-sponsored project SO-GRM (Self-Organising
Grid Resource Management) in cooperation with BT.
Tupleware: Redefining Modern Analytics
There is a fundamental discrepancy between the targeted and actual users of
current analytics frameworks. Most systems are designed for the data and
infrastructure of the Googles and Facebooks of the world---petabytes of data
distributed across large cloud deployments consisting of thousands of cheap
commodity machines. Yet, the vast majority of users operate clusters ranging
from a few to a few dozen nodes, analyze relatively small datasets of up to a
few terabytes, and perform primarily compute-intensive operations. Targeting
these users fundamentally changes the way we should build analytics systems.
This paper describes the design of Tupleware, a new system specifically aimed
at the challenges faced by the typical user. Tupleware's architecture brings
together ideas from the database, compiler, and programming languages
communities to create a powerful end-to-end solution for data analysis. We
propose novel techniques that consider the data, computations, and hardware
together to achieve maximum performance on a case-by-case basis. Our
experimental evaluation quantifies the impact of our novel techniques and shows
orders of magnitude performance improvement over alternative systems.
Main Memory Adaptive Indexing for Multi-core Systems
Adaptive indexing is a concept that considers index creation in databases as
a by-product of query processing, as opposed to traditional full index creation
where the indexing effort is performed up front before answering any queries.
Adaptive indexing has received a considerable amount of attention, and several
algorithms have been proposed over the past few years, including a recent
experimental study comparing a large number of existing methods. Until now,
however, most adaptive indexing algorithms have been designed to be single-threaded,
yet with multi-core systems already well established, the idea of designing
parallel algorithms for adaptive indexing is very natural. In this regard, only
one parallel algorithm for adaptive indexing has recently appeared in the
literature: the parallel version of standard cracking. In this paper we
describe three alternative parallel algorithms for adaptive indexing, including
a second variant of a parallel standard cracking algorithm. Additionally, we
describe a hybrid parallel sorting algorithm, and a NUMA-aware method based on
sorting. We then thoroughly compare all these algorithms experimentally, along with
a variant of a recently published parallel version of radix sort. Parallel
sorting algorithms serve as a realistic baseline for multi-threaded adaptive
indexing techniques. In total we experimentally compare seven parallel
algorithms. Additionally, we extensively profile all considered algorithms. The
initial set of experiments considered in this paper indicates that our parallel
algorithms significantly improve over previously known ones. Our results
suggest that, although adaptive indexing algorithms are a good design choice in
single-threaded environments, the rules change considerably in the parallel
case. That is, in future highly-parallel environments, sorting algorithms could
be serious alternatives to adaptive indexing.
Comment: 26 pages, 7 figures
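For readers unfamiliar with the baseline that the parallel variants build on, the following single-threaded C++ sketch shows the core step of standard cracking: each range query partitions the still-unsorted piece of the column in place around the query bounds and records the split points in a cracker index. The data structure and names are illustrative, not the paper's code.

```cpp
// Minimal single-threaded sketch of standard cracking, the baseline concept
// behind the parallel adaptive indexing algorithms. Names are illustrative.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <map>
#include <vector>

struct CrackerColumn {
    std::vector<int> data;
    std::map<int, size_t> index;   // pivot value -> first position with data >= pivot

    // Partition the still-unsorted piece that contains `pivot` and remember
    // the split point; repeated queries incrementally "sort" the column.
    size_t crack(int pivot) {
        auto it = index.lower_bound(pivot);
        if (it != index.end() && it->first == pivot) return it->second;

        size_t lo = 0, hi = data.size();
        if (it != index.begin()) lo = std::prev(it)->second;
        if (it != index.end()) hi = it->second;

        auto mid = std::partition(data.begin() + lo, data.begin() + hi,
                                  [pivot](int v) { return v < pivot; });
        size_t pos = static_cast<size_t>(mid - data.begin());
        index[pivot] = pos;
        return pos;
    }

    // Range count falls out as a by-product of cracking on both bounds.
    size_t count_range(int low, int high) { return crack(high) - crack(low); }
};

int main() {
    CrackerColumn col{{7, 1, 9, 4, 3, 8, 2, 6, 5, 0}, {}};
    std::cout << col.count_range(3, 7) << "\n";   // values in [3, 7): 3, 4, 5, 6 -> 4
}
```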
Improving Function Coverage with Munch: A Hybrid Fuzzing and Directed Symbolic Execution Approach
Fuzzing and symbolic execution are popular techniques for finding
vulnerabilities and generating test cases for programs. Fuzzing, a black-box
method that mutates seed input values, is generally incapable of generating
diverse inputs that exercise all paths in the program. Due to the
path-explosion problem and dependence on SMT solvers, symbolic execution may
also not achieve high path coverage. A hybrid technique involving fuzzing and
symbolic execution may achieve better function coverage than fuzzing or
symbolic execution alone. In this paper, we present Munch, an open source
framework implementing two hybrid techniques based on fuzzing and symbolic
execution. We empirically show, using nine large open-source programs, that
overall Munch achieves higher (in-depth) function coverage than symbolic
execution or fuzzing alone. Using metrics based on total analysis time and the
number of queries issued to the SMT solver, we also show that Munch is more
efficient at achieving better function coverage.
Comment: To appear at 33rd ACM/SIGAPP Symposium On Applied Computing (SAC). To
be held from 9th to 13th April, 201
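One natural hybrid of the two techniques, in the spirit of the abstract, runs the fuzzer first and then directs symbolic execution at functions the fuzzer never reached. The C++ sketch below shows only that orchestration; run_fuzzer, all_functions, and run_symbex_towards are hypothetical stubs standing in for external tools (a coverage-guided fuzzer and an SMT-based symbolic executor), not part of Munch itself.

```cpp
// Sketch of a fuzz-then-directed-symbolic-execution hybrid. The three helper
// functions are hypothetical stubs for external tools; only the control flow
// of the orchestration is shown.
#include <iostream>
#include <set>
#include <string>
#include <vector>

using Input = std::string;

std::set<std::string> run_fuzzer(const std::vector<Input>& seeds);           // functions covered by fuzzing
std::set<std::string> all_functions();                                       // e.g. from the binary's symbol table
std::vector<Input> run_symbex_towards(const std::set<std::string>& targets); // directed symbolic execution

int main() {
    std::vector<Input> seeds = {"seed"};

    // Phase 1: cheap breadth-first exploration with the fuzzer.
    std::set<std::string> covered = run_fuzzer(seeds);

    // Phase 2: collect functions the fuzzer never reached and direct the
    // symbolic executor at them instead of exploring all paths.
    std::set<std::string> uncovered;
    for (const auto& f : all_functions())
        if (!covered.count(f)) uncovered.insert(f);

    std::vector<Input> new_inputs = run_symbex_towards(uncovered);
    std::cout << new_inputs.size() << " inputs targeting "
              << uncovered.size() << " uncovered functions\n";
}

// Stub implementations so the sketch compiles; real runs would invoke the tools.
std::set<std::string> run_fuzzer(const std::vector<Input>&) { return {"parse", "main"}; }
std::set<std::string> all_functions() { return {"parse", "main", "handle_error"}; }
std::vector<Input> run_symbex_towards(const std::set<std::string>& t) { return std::vector<Input>(t.size(), ""); }
```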
A Tamper and Leakage Resilient von Neumann Architecture
We present a universal framework for tamper and leakage resilient computation on a von
Neumann Random Access Architecture (RAM for short). The RAM has one CPU that accesses
a storage, which we call the disk. The disk is subject to leakage and tampering. So is the bus
connecting the CPU to the disk. We assume that the CPU is leakage and tamper-free. For
a fixed value of the security parameter, the CPU has constant size. Therefore the code of the
program to be executed is stored on the disk, i.e., we consider a von Neumann architecture. The
most prominent consequence of this is that the code of the program executed will be subject to
tampering.
We construct a compiler for this architecture which transforms any keyed primitive into a
RAM program where the key is encoded and stored on the disk along with the program to
evaluate the primitive on that key. Our compiler only assumes the existence of a so-called
continuous non-malleable code, and it only needs black-box access to such a code. No further
(cryptographic) assumptions are needed. In particular, this means that given an
information-theoretic code, the overall construction is information-theoretically secure.
Although it is required that the CPU be tamper- and leakage-proof, its design is independent
of the actual primitive being computed and its internal storage is non-persistent, i.e., all secret
registers are reset between invocations. Hence, our result can be interpreted as reducing the
problem of shielding arbitrary complex computations to protecting a single, simple yet universal
component.
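The reduction described above can be pictured as a small trusted loop around an untrusted disk. The C++ sketch below shows only that control flow: the CPU fetches encoded words, decodes them, computes one step, and stops on any decoding failure. nmc_encode and nmc_decode are trivial placeholders, not an actual continuous non-malleable code, and the "instructions" are dummies.

```cpp
// Control-flow sketch of the trust model: a constant-size, trusted CPU reads
// encoded words from an untrusted disk, decodes, computes one step, and
// self-destructs on any decoding failure. nmc_encode/nmc_decode are
// placeholders, NOT an actual continuous non-malleable code.
#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

using Word = std::string;

Word nmc_encode(const Word& w) { return "enc:" + w; }                 // placeholder
std::optional<Word> nmc_decode(const Word& c) {                       // placeholder
    if (c.rfind("enc:", 0) != 0) return std::nullopt;                 // "tampering detected"
    return c.substr(4);
}

int main() {
    // Untrusted disk: holds the encoded program together with the encoded key.
    std::vector<Word> disk = {nmc_encode("load k"), nmc_encode("mac m"), nmc_encode("halt")};

    // Trusted, constant-size CPU: non-persistent registers, reset between runs.
    Word reg;
    for (const Word& cell : disk) {
        std::optional<Word> instr = nmc_decode(cell);
        if (!instr) std::abort();          // decoding failure => stop (self-destruct)
        reg = *instr;                      // one computation step (elided)
        // Any state written back to the disk would be re-encoded here.
    }
    std::cout << "last instruction: " << reg << "\n";
}
```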
- …