Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 200
On Predictive Modeling for Optimizing Transaction Execution in Parallel OLTP Systems
A new emerging class of parallel database management systems (DBMS) is
designed to take advantage of the partitionable workloads of on-line
transaction processing (OLTP) applications. Transactions in these systems are
optimized to execute to completion on a single node in a shared-nothing cluster
without needing to coordinate with other nodes or use expensive concurrency
control measures. But some OLTP applications cannot be partitioned such that
all of their transactions execute within a single partition in this manner.
These distributed transactions access data not stored within their local
partitions and subsequently require more heavy-weight concurrency control
protocols. Further difficulties arise when the transaction's execution
properties, such as the number of partitions it may need to access or whether
it will abort, are not known beforehand. The DBMS could mitigate these
performance issues if it is provided with additional information about
transactions. Thus, in this paper we present a Markov model-based approach for
automatically selecting which optimizations a DBMS could use, namely (1) more
efficient concurrency control schemes, (2) intelligent scheduling, (3) reduced
undo logging, and (4) speculative execution. To evaluate our techniques, we
implemented our models and integrated them into a parallel, main-memory OLTP
DBMS to show that we can improve the performance of applications with diverse
workloads.
Comment: VLDB201
Decision support for build-to-order supply chain management through multiobjective optimization
This is the post-print version of the final paper published in the International Journal of Production Economics; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Copyright @ 2010 Elsevier B.V.

This paper aims to identify the gaps in decision-making support based on multiobjective optimization (MOO) for build-to-order supply chain management (BTO-SCM). To this end, it reviews the literature available on modelling build-to-order supply chains (BTO-SC), with a focus on adopting MOO techniques as a decision support tool. The literature is classified by the nature of the decisions in different parts of the supply chain, and the key decision areas across a typical BTO-SC are discussed in detail. Available software packages suitable for supporting decision making in BTO supply chains are also identified and their related solutions are outlined. The gap between the modelling and optimization techniques developed in the literature and the decision support needed in practice is highlighted. Future research directions to better exploit the decision support capabilities of MOO are proposed, including: reformulation of the extant optimization models from a MOO perspective; development of decision support for interfaces not involving manufacturers; development of scenarios around service-based objectives; development of efficient solution tools; considering the interests of each supply chain party as a separate objective, to account for fair treatment of their requirements; and applying the existing methodologies to real-life data sets.

Brunel Research Initiative and Enterprise Fund (BRIEF)
Research on parallel logic language implementation and architecture at ICOT
Abstract is not available.
A Safety-First Approach to Memory Models.
Sequential consistency (SC) is arguably the most intuitive behavior for a shared-memory multithreaded program. It is widely accepted that language-level SC could significantly improve programmability of a multiprocessor system. However, efficiently supporting end-to-end SC remains a challenge as it requires that both compiler and hardware optimizations preserve SC semantics.
Current concurrent languages support a relaxed memory model that requires programmers to explicitly annotate all memory accesses that can participate in a data race ("unsafe" accesses). This requirement allows the compiler and hardware to aggressively optimize unannotated accesses, which are assumed to be data-race-free ("safe" accesses), while still preserving SC semantics. However, unannotated data races are easy for programmers to introduce accidentally and difficult to detect, and in such cases the safety and correctness of programs are significantly compromised.
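For concreteness (using C++'s std::atomic as the annotation mechanism, not any notation from the dissertation), the annotation burden looks like this: the programmer must mark every potentially racy access, and a single forgotten annotation is a silent data race.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Annotated ("unsafe") access: may race, so it is declared atomic and
// gets sequentially consistent behavior by default.
std::atomic<bool> ready{false};
// Unannotated ("safe") access: assumed data-race-free, so the compiler
// and hardware may optimize it aggressively.
int payload = 0;

void producer() {
    payload = 42;       // race-free: published via the flag below
    ready.store(true);  // seq_cst store makes `payload` visible
}

int consumer() {
    while (!ready.load()) {}  // seq_cst load; if `ready` were a plain
                              // bool, this would be exactly the kind of
                              // unannotated race the abstract warns about
    return payload;
}
```

Under the safety-first approach argued for below, the compiler and hardware would instead treat even the plain `payload` accesses as potentially unsafe unless proven otherwise.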
This dissertation argues instead for a safety-first approach, whereby every memory operation is treated as potentially unsafe by the compiler and hardware unless it is proven otherwise.
The first solution, DRFx memory model, allows many common compiler and hardware optimizations (potentially SC-violating) on unsafe accesses and uses a runtime support to detect potential SC violations arising from reordering of unsafe accesses. On detecting a potential SC violation, execution is halted before the safety property is compromised.
The second solution takes a different approach and preserves SC in both compiler and hardware. Both SC-preserving compiler and hardware are also built on the safety-first approach. All memory accesses are treated as potentially unsafe by the compiler and hardware. SC-preserving hardware relies on different static and dynamic techniques to identify safe accesses. Our results indicate that supporting SC at the language level is not expensive in terms of performance and hardware complexity.
The dissertation also explores an extension of this safety-first approach for data-parallel accelerators such as Graphics Processing Units (GPUs). Significant microarchitectural differences between CPUs and GPUs require rethinking of efficient solutions for preserving SC in GPUs. The proposed solution based on our SC-preserving approach performs nearly on par with a baseline GPU that implements a data-race-free-0 memory model.
PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/120794/1/ansingh_1.pd
Low Power Processor Architectures and Contemporary Techniques for Power Optimization – A Review
Technological evolution has significantly increased the number of transistors for a given die area and raised switching speeds from a few MHz to the GHz range. This decline in size and boost in performance demands a corresponding shrinking of supply voltage and effective power dissipation in chips with millions of transistors, which has triggered a substantial amount of research into power reduction techniques in almost every aspect of the chip, and particularly in the processor cores it contains. This paper presents an overview of techniques for achieving power efficiency, mainly at the processor core level, but also visits related domains such as buses and memories. Various processor parameters and features, such as supply voltage, clock frequency, cache, and pipelining, can be optimized to reduce the power consumption of the processor, and this paper discusses various ways in which these parameters can be optimized. Emerging power-efficient processor architectures are also overviewed and research activities discussed, which should help the reader identify how these factors in a processor contribute to power consumption. Some of these concepts are already established, whereas others remain active research areas. © 2009 ACADEMY PUBLISHER
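The leverage of supply-voltage scaling in the techniques surveyed above follows from the standard dynamic-power relation for CMOS, P ≈ C·V²·f: because voltage enters quadratically, halving V (even at fixed frequency) quarters dynamic power. A quick sketch with illustrative values:

```cpp
#include <cassert>
#include <cmath>

// Dynamic CMOS power: P = C * V^2 * f
// (switched capacitance in farads, supply voltage in volts, frequency in Hz).
double dynamic_power(double capacitance_f, double voltage_v, double freq_hz) {
    return capacitance_f * voltage_v * voltage_v * freq_hz;
}
```

In practice lowering V also forces f down, which is why techniques such as dynamic voltage and frequency scaling (DVFS) trade performance for the quadratic power savings.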
Performance Optimization Strategies for Transactional Memory Applications
This thesis presents tools for Transactional Memory (TM) applications that cover multiple TM systems (software, hardware, and hybrid TM) and use information from all layers of the TM software stack. To this end, the thesis addresses a number of challenges in extracting static information, information about run-time behavior, and expert-level knowledge to develop new methods and strategies for the optimization of TM applications.
New Directions in Cloud Programming
Nearly twenty years after the launch of AWS, it remains difficult for most
developers to harness the enormous potential of the cloud. In this paper we lay
out an agenda for a new generation of cloud programming research aimed at
bringing research ideas to programmers in an evolutionary fashion. Key to our
approach is a separation of distributed programs into a PACT of four facets:
Program semantics, Availability, Consistency and Targets of optimization. We
propose to migrate developers gradually to PACT programming by lifting familiar
code into our more declarative level of abstraction. We then propose a
multi-stage compiler that emits human-readable code at each stage that can be
hand-tuned by developers seeking more control. Our agenda raises numerous
research challenges across multiple areas including language design, query
optimization, transactions, distributed consistency, compilers and program
synthesis.