The Family of MapReduce and Large Scale Data Processing Systems
In the last two decades, the continuous increase in computational power has
produced an overwhelming flow of data, which has called for a paradigm shift
in computing architectures and large scale data processing mechanisms.
MapReduce is a simple and powerful programming model that enables easy
development of scalable parallel applications to process vast amounts of data
on large clusters of commodity machines. It isolates the application from the
details of running a distributed program, such as issues of data
distribution, scheduling, and fault tolerance. However, the original
implementation of the
MapReduce framework had some limitations that have been tackled by many
research efforts in several follow-up works after its introduction. This
article provides a comprehensive survey of a family of approaches and
mechanisms for large scale data processing that have been implemented based
on the original idea of the MapReduce framework and are currently gaining a
lot of momentum in both the research and industrial communities. We also
cover a set of systems that have been implemented to provide declarative
programming interfaces on top of the MapReduce framework. In addition, we
review several large scale data processing systems that resemble some of the
ideas of the MapReduce framework for different purposes and application
scenarios. Finally, we discuss some of the future research directions for
implementing the next generation of MapReduce-like solutions.
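For readers unfamiliar with the model, the two phases are compact enough to
sketch. The following is a minimal single-machine simulation in Scala
(illustrative code only, not the API of Hadoop or any other MapReduce
implementation): the map function emits key-value pairs, a grouping step
plays the role of the shuffle, and the reduce function combines all values
that share a key.

```scala
// Minimal single-machine sketch of the MapReduce programming model.
// Illustrative only; not the API of any real MapReduce framework.
object WordCount {
  // Map phase: turn each input record into (key, value) pairs.
  def mapper(line: String): Seq[(String, Int)] =
    line.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq.map(w => (w, 1))

  // Reduce phase: combine all values that share a key.
  def reducer(key: String, values: Seq[Int]): (String, Int) =
    (key, values.sum)

  def main(args: Array[String]): Unit = {
    val input  = Seq("the quick brown fox", "the lazy dog")
    val counts = input
      .flatMap(mapper)                                    // map phase
      .groupBy { case (k, _) => k }                       // "shuffle"
      .map { case (k, kvs) => reducer(k, kvs.map(_._2)) } // reduce phase
    counts.toSeq.sortBy(_._1).foreach(println)
  }
}
```

In a real deployment the framework distributes the two phases across a
cluster and handles data distribution, scheduling, and fault tolerance,
which is precisely the machinery the model hides from the application
developer.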
Reify Your Collection Queries for Modularity and Speed!
Modularity and efficiency are often conflicting requirements, such that
programmers have to trade one for the other. We analyze this dilemma in the
context of programs operating on collections. Performance-critical code that
uses collections often needs to be hand-optimized, leading to non-modular,
brittle,
and redundant code. In principle, this dilemma could be avoided by automatic
collection-specific optimizations, such as fusion of collection traversals,
usage of indexing, or reordering of filters. Unfortunately, it is not obvious
how to encode such optimizations in terms of ordinary collection APIs, because
the program operating on the collections is not reified and hence cannot be
analyzed.
We propose SQuOpt, the Scala Query Optimizer--a deep embedding of the Scala
collections API that allows such analyses and optimizations to be defined and
executed within Scala, without relying on external tools or compiler
extensions. SQuOpt provides the same "look and feel" (syntax and static typing
guarantees) as the standard collections API. We evaluate SQuOpt by
re-implementing several code analyses of the FindBugs tool using SQuOpt,
showing average speedups of 12x with a maximum of 12800x, and hence
demonstrate that SQuOpt can reconcile modularity and efficiency in
real-world applications.
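The essence of such a deep embedding can be illustrated with a tiny,
hypothetical query AST in Scala (deliberately much simpler than SQuOpt's
actual API): because the query is reified as data rather than executed
eagerly, an optimizer can inspect it and apply rewrites such as filter
fusion before interpretation.

```scala
// Hypothetical deep embedding of collection queries (not SQuOpt's API):
// queries are reified as an AST that an optimizer can inspect and rewrite.
sealed trait Exp[A]
case class Source[A](data: Seq[A])                 extends Exp[A]
case class Filter[A](src: Exp[A], p: A => Boolean) extends Exp[A]

object Query {
  // Example rewrite: fuse nested filters into a single traversal.
  def optimize[A](e: Exp[A]): Exp[A] = e match {
    case Filter(Filter(src, p), q) =>
      optimize(Filter(src, (x: A) => p(x) && q(x)))
    case other => other
  }

  // Interpreter: run the (optimized) query tree on ordinary collections.
  def run[A](e: Exp[A]): Seq[A] = e match {
    case Source(data)   => data
    case Filter(src, p) => run(src).filter(p)
  }

  def main(args: Array[String]): Unit = {
    val q = Filter(Filter(Source(1 to 10), (x: Int) => x % 2 == 0),
                   (x: Int) => x > 4)
    println(run(optimize(q)))  // one fused traversal: 6, 8, 10
  }
}
```

A production-quality embedding such as SQuOpt additionally preserves the
standard API's syntax and static typing guarantees, which this sketch does
not attempt.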
Physical Plan Instrumentation in Databases: Mechanisms and Applications
Database management systems (DBMSs) are designed to compile SQL queries into physical plans that, when executed, produce the results of those queries. Building on this functionality, an ever-increasing number of application domains (e.g., provenance management, online query optimization, physical database design, interactive data profiling, monitoring, and interactive data visualization) seek to operate on how queries are executed by the DBMS, for purposes ranging from debugging and data explanation to optimization and monitoring. Unfortunately, DBMSs provide little, if any, support to facilitate the development of this class of important application domains. As a result, database application developers and database system architects either rewrite the database internals in ad-hoc ways; work around the SQL interface, if possible, with inevitable performance penalties; or even build new databases from scratch only to express and optimize their domain-specific application logic over how queries are executed.
To address this problem in a principled manner in this dissertation, we introduce a prototype DBMS, namely, Smoke, that exposes instrumentation mechanisms in the form of a framework to allow external applications to manipulate physical plans. Intuitively, a physical plan is the underlying representation that DBMSs use to encode how a SQL query will be executed, and providing instrumentation mechanisms at this representation level allows applications to express and optimize their logic on how queries are executed.
Having such an instrumentation-enabled DBMS in place, we then consider how to express and optimize applications whose logic relies on how queries are executed. To best demonstrate the expressive and optimization power of instrumentation-enabled DBMSs, we express and optimize applications across several important domains including provenance management, interactive data visualization, interactive data profiling, physical database design, online query optimization, and query discovery. Expressivity-wise, we show that Smoke can express known techniques, introduce novel semantics on known techniques, and introduce new techniques across domains. Performance-wise, we show case-by-case that Smoke is on par with or up to several orders of magnitude faster than state-of-the-art imperative and declarative implementations of important applications across domains.
As such, we believe our contributions provide evidence for, and form the basis of, a class of instrumentation-enabled DBMSs designed to express and optimize applications across important domains whose core logic operates over how queries are executed by DBMSs.
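The core mechanism can be illustrated with a hypothetical pull-based operator tree in Scala (a sketch of the general idea, not Smoke's actual interface): because the physical plan is an explicit data structure, an external application can splice a pass-through probe into it, e.g., to capture the lineage of result tuples.

```scala
// Hypothetical sketch of physical-plan instrumentation (not Smoke's
// actual interface): the plan is an explicit operator tree, and an
// external application can splice a pass-through probe into it.
object PlanInstrumentation {
  type Row = Map[String, Any]

  trait Operator { def next(): Option[Row] }  // pull-based iterator model

  class Scan(rows: Seq[Row]) extends Operator {
    private val it = rows.iterator
    def next(): Option[Row] = if (it.hasNext) Some(it.next()) else None
  }

  class Select(child: Operator, p: Row => Boolean) extends Operator {
    def next(): Option[Row] = child.next() match {
      case Some(r) if p(r) => Some(r)
      case Some(_)         => next()  // skip non-qualifying rows
      case None            => None
    }
  }

  // The instrumentation hook: forwards rows unchanged while observing them.
  class Probe(child: Operator, observe: Row => Unit) extends Operator {
    def next(): Option[Row] = { val r = child.next(); r.foreach(observe); r }
  }

  def main(args: Array[String]): Unit = {
    val rows = Seq(Map("id" -> 1, "v" -> 10), Map("id" -> 2, "v" -> 3))
    val seen = scala.collection.mutable.ListBuffer.empty[Row]
    val plan = new Probe(
      new Select(new Scan(rows), r => r("v").asInstanceOf[Int] > 5),
      seen += _)
    Iterator.continually(plan.next()).takeWhile(_.isDefined).foreach(_ => ())
    println(seen)  // rows observed at the instrumented plan node
  }
}
```

In this style, provenance capture, monitoring, or visualization logic becomes a plan rewrite (inserting probes) rather than a modification of the database internals.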
An Efficient and Flexible Implementation of Aspect-Oriented Languages
Compilers for modern object-oriented programming languages generate code in a platform-independent intermediate language that preserves the concepts of the source language; for example, classes, fields, methods, and virtual or static dispatch can be directly identified within the intermediate code. To execute this intermediate code, state-of-the-art implementations of virtual machines perform just-in-time (JIT) compilation of the intermediate language; i.e., the virtual instructions in the intermediate code are compiled to native machine code at runtime. In this step, a declarative representation of source-language concepts in the intermediate language facilitates highly efficient adaptive and speculative optimization of the running program, which may not be possible otherwise.

In contrast, constructs of aspect-oriented languages - which improve the separation of concerns - are commonly realized by compiling them to conventional intermediate-language instructions or by driving transformations of the intermediate code, a process called weaving. This way, the semantics of the aspect-oriented constructs are not preserved in a declarative manner at the intermediate-language level. This representational gap between aspect-oriented concepts in the source code and in the intermediate code hinders high-performance optimizations and weakens features of software engineering processes such as debugging support or the continuity property of incremental compilation: modifying an aspect in the source code potentially requires re-weaving multiple other modules.

To leverage language implementation techniques for aspect-oriented languages, this thesis proposes the Aspect-Language Implementation Architecture (ALIA), which prescribes - amongst others - the existence of an intermediate representation preserving the aspect-oriented constructs of the source program. A central component of this architecture is an extensible and flexible meta-model of aspect-oriented concepts which acts as an interface between front-ends (usually a compiler) and back-ends (usually a virtual machine) of aspect-oriented language implementations. The architecture and the meta-model are embodied for Java-based aspect-oriented languages in the Framework for Implementing Aspect Languages (FIAL) and the Language-Independent Aspect Meta-Model (LIAM), respectively, where LIAM is part of the framework. FIAL generically implements the workflows required from an execution environment when executing aspects provided in terms of LIAM.

In addition to the first-class intermediate representation of aspect-oriented concepts, ALIA - and the FIAL framework as its incarnation - treat the points of interaction between aspects and other modules - so-called join points - as being late-bound to an implementation. In analogy to the object-oriented terminology for late-bound methods, the join points are called virtual in ALIA. Together, the first-class representation of aspect-oriented concepts in the intermediate representation and the treatment of join points as virtual facilitate the implementation of new and effective optimizations for aspect-oriented programs.

Three different instantiations of the FIAL framework are presented in this thesis, showcasing the feasibility of integrating language back-ends with different characteristics into the framework. One integration supports static aspect deployment and produces results similar to conventional aspect weavers; the woven code is executable on any standard Java virtual machine.
Two instantiations are fully dynamic: one is realized as a portable plug-in for standard Java virtual machines, and the other, called Steamloom^ALIA, is realized as a deep integration into a specific virtual machine, the Jikes Research Virtual Machine [Alpern2005]. While the latter instantiation is not portable, it exhibits outstanding performance. Virtual join point dispatch is a generalization of virtual method dispatch; thus, well-established and elaborate optimization techniques from the field of virtual method dispatch are re-used with slight adaptations in Steamloom^ALIA. These optimizations for aspect-oriented concepts go beyond the generation of optimal bytecode. The power of such optimizations is shown especially strikingly in this thesis by the examples of the cflow dynamic property, which may need to be evaluated during virtual join point dispatch, and dynamic aspect deployment, i.e., the selective modification of specific join points' dispatch.

In order to evaluate the optimization techniques developed in this thesis, a means for benchmarking has been developed in terms of macro-benchmarks; i.e., real-world applications are executed. These benchmarks show that for both concepts the implementation presented here is at least roughly twice as fast as state-of-the-art implementations performing static optimizations of the generated bytecode; in many cases this thesis's optimizations even reach a speed-up of two orders of magnitude for the cflow implementation and four orders of magnitude for dynamic deployment.

The intermediate representation in terms of LIAM models is general enough to express the constructs of multiple aspect-oriented languages. Therefore, optimizations of features common to different languages are available to applications written in all of them. To prove that the abstractions provided by LIAM are sufficient to act as an intermediate language for multiple aspect-oriented source languages, an automated translation from source code to LIAM models has been realized for three very different and popular aspect-oriented languages: AspectJ, JAsCo, and Compose*. In addition, the feasibility of translating from CaesarJ to LIAM models is shown by discussion. The use of an extensible meta-model as intermediate representation furthermore simplifies the definition of new aspect-oriented language concepts, as is shown by a tutorial-style example of designing a domain-specific extension to the Java language in this thesis.
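To give an intuition for the cflow example, the following hand-coded Scala sketch illustrates the semantics that such implementations optimize (it illustrates AspectJ-style cflow only; it is not the FIAL or Steamloom^ALIA implementation): a thread-local depth counter records whether control flow is currently inside a designated method, and advice at a join point consults it.

```scala
// Hand-coded illustration of AspectJ-style cflow semantics (not the
// FIAL/Steamloom^ALIA implementation): a thread-local counter tracks
// whether control flow is inside update(), and the advice at the
// render() join point consults it.
object CflowDemo {
  private val depth = new ThreadLocal[Int] { override def initialValue = 0 }

  private def inCflowOfUpdate: Boolean = depth.get > 0

  def update(x: Int): Unit = {
    depth.set(depth.get + 1)           // entering cflow(update)
    try render(x)
    finally depth.set(depth.get - 1)   // leaving cflow(update)
  }

  def render(x: Int): Unit = {
    if (inCflowOfUpdate)               // advice guarded by cflow(update)
      println(s"advice runs: render($x) called within update")
  }

  def main(args: Array[String]): Unit = {
    render(1)  // advice does not run: not within cflow(update)
    update(2)  // advice runs inside the nested render call
  }
}
```

Naive implementations pay this per-call bookkeeping at every affected join point; the virtual-machine-level optimizations described above aim to avoid such costs in the common case.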
A Functional, Comprehensive and Extensible Multi-Platform Querying and Transformation Approach
This thesis is about a new model querying and transformation approach called FunnyQT, which is realized as a set of APIs and embedded domain-specific languages (DSLs) in the JVM-based functional Lisp dialect Clojure. Founded on a powerful model management API, FunnyQT provides querying services such as comprehensions, quantified expressions, regular path expressions, logic-based relational model querying, and pattern matching. On the transformation side, it supports the definition of unidirectional model-to-model transformations, in-place transformations, and bidirectional transformations, as well as a new kind of co-evolution transformation that allows a model and its metamodel to evolve simultaneously. Several properties make FunnyQT unique. Foremost, it is just a Clojure library; thus, FunnyQT queries and transformations are Clojure programs. However, most higher-level services are provided as task-oriented embedded DSLs which use Clojure's powerful macro system to support the user with tailor-made language constructs important for the task at hand. Since queries and transformations are just Clojure programs, they may use any Clojure or Java library for their own purposes, e.g., a templating library for defining model-to-text transformations. Conversely, like every Clojure program, FunnyQT queries and transformations compile to normal JVM bytecode and can easily be called from other JVM languages. Furthermore, FunnyQT is platform-independent and designed with extensibility in mind. By default, it supports the Eclipse Modeling Framework and JGraLab, and support for other modeling frameworks can be added with minimal effort and without having to modify the respective framework's classes or FunnyQT itself. Lastly, because FunnyQT is embedded in a functional language, it has a functional emphasis itself: every query and every transformation compiles to a function which can be passed around, given to higher-order functions, or parametrized with other functions.
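FunnyQT itself is a Clojure library, but its functional emphasis can be illustrated in any functional JVM language; the following Scala sketch (with a made-up two-class metamodel, not FunnyQT's API) shows the key property that a transformation compiles to an ordinary first-class function.

```scala
// Illustration of the functional emphasis (hypothetical metamodel in
// Scala, not FunnyQT's Clojure API): a transformation is an ordinary
// function value, so it composes and can be passed to higher-order
// functions like any other value.
object Entity2Table {
  case class Attr(name: String, tpe: String)
  case class Entity(name: String, attrs: List[Attr])  // source metamodel
  case class Table(name: String, cols: List[String])  // target metamodel

  // The transformation is just a function value.
  val entity2table: Entity => Table =
    e => Table(e.name.toLowerCase, e.attrs.map(a => s"${a.name}: ${a.tpe}"))

  def main(args: Array[String]): Unit = {
    val model = List(
      Entity("Person", List(Attr("id", "Int"), Attr("name", "String"))))
    // Passed to a higher-order function and applied model-wide.
    println(model.map(entity2table))
  }
}
```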
Querying heterogeneous data in an in-situ unified agile system
Data integration provides a unified view of data by combining different data sources. In today’s multi-disciplinary and collaborative research environments, data is often produced and consumed by various means, and multiple researchers in different divisions operate on the data to satisfy various research requirements, using different query processors and analysis tools. This makes data integration a crucial component of any successful data-intensive research activity. The fundamental difficulty is that data is heterogeneous not only in syntax, structure, and semantics, but also in the way it is accessed and queried. We introduce QUIS (QUery In-Situ), an agile query system equipped with a unified query language and a federated execution engine that is capable of running queries on heterogeneous data sources in an in-situ manner. Its language provides advanced features such as virtual schemas, heterogeneous joins, and polymorphic result set representation. QUIS utilizes a federation of agents to transform a given input query written in its language into a (set of) computation models that are executable on the designated data sources. Federated query virtualization has the disadvantage that some aspects of a query may not be supported by the designated data sources. QUIS ensures that input queries are always fully satisfied: if the target data sources do not fulfill all of the query's requirements, QUIS detects the features that are lacking and complements them in a transparent manner. QUIS provides union and join capabilities over an unbounded list of heterogeneous data sources; in addition, it offers solutions for heterogeneous query planning and optimization. In brief, QUIS mitigates data access heterogeneity through query virtualization, on-the-fly transformation, and federated execution. It offers in-situ querying, agile querying, heterogeneous data source querying, unified execution, late-bound virtual schemas, and remote execution.
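The complementation idea can be sketched with a hypothetical capability-based model in Scala (an illustration of the general principle, not QUIS's actual engine): the planner pushes down only the query features a source supports and evaluates the remainder locally, so the query is always fully satisfied.

```scala
// Hypothetical sketch of QUIS-style query complementation (not the
// actual QUIS engine): supported query parts are pushed down to the
// source; missing capabilities (here, ordering) are detected and
// evaluated locally in a transparent manner.
object Complementation {
  case class Query(filter: Option[String], orderBy: Option[String])

  trait Source {
    def supportsOrderBy: Boolean
    def run(pushed: Query): Seq[Map[String, Int]]
  }

  def execute(q: Query, src: Source): Seq[Map[String, Int]] = {
    // Push down only what the source can handle.
    val pushed = if (src.supportsOrderBy) q else q.copy(orderBy = None)
    val rows   = src.run(pushed)
    // Complement the missing capability locally.
    q.orderBy match {
      case Some(col) if !src.supportsOrderBy => rows.sortBy(_(col))
      case _                                 => rows
    }
  }

  def main(args: Array[String]): Unit = {
    val csvLike = new Source {        // e.g. a plain file with no engine
      val supportsOrderBy = false     // cannot sort by itself
      def run(pushed: Query) =
        Seq(Map("id" -> 2), Map("id" -> 1), Map("id" -> 3))
    }
    println(execute(Query(None, Some("id")), csvLike))  // sorted locally
  }
}
```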
10381 Summary and Abstracts Collection -- Robust Query Processing
Dagstuhl seminar 10381 on robust query processing (held 19.09.10 -
24.09.10) brought together a diverse set of researchers and practitioners
with a broad range of expertise for the purpose of fostering discussion
and collaboration regarding causes, opportunities, and solutions for
achieving robust query processing.
The seminar strove to build a unified view across
the loosely-coupled system components responsible for
the various stages of database query processing.
Participants were chosen for their experience with database
query processing and, where possible, their prior work in academic
research or in product development towards robustness in database query
processing.
In order to pave the way to motivate, measure, and protect future advances
in robust query processing, seminar 10381 focused on developing tests
for measuring the robustness of query processing.
In these proceedings, we first review the seminar topics, goals,
and results, then present abstracts or notes of some of the seminar break-out
sessions.
We also include, as an appendix,
the robust query processing reading list that
was collected and distributed to participants before the seminar began,
as well as summaries of a few of those papers that were
contributed by some participants.