1,806 research outputs found

    Staring into the abyss: An evaluation of concurrency control with one thousand cores

    Computer architectures are moving towards an era dominated by many-core machines with dozens or even hundreds of cores on a single chip. This unprecedented level of on-chip parallelism introduces a new dimension to scalability that current database management systems (DBMSs) were not designed for. In particular, as the number of cores increases, the problem of concurrency control becomes extremely challenging. With hundreds of threads running in parallel, the complexity of coordinating competing accesses to data will likely diminish the gains from increased core counts. To better understand just how unprepared current DBMSs are for future CPU architectures, we performed an evaluation of concurrency control for on-line transaction processing (OLTP) workloads on many-core chips. We implemented seven concurrency control algorithms on a main-memory DBMS and, using computer simulations, scaled our system to 1024 cores. Our analysis shows that all algorithms fail to scale to this magnitude, but for different reasons. In each case, we identify fundamental bottlenecks that are independent of the particular database implementation and argue that even state-of-the-art DBMSs suffer from these limitations. We conclude that rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from the ground up and tightly coupled with the hardware. Funded by Intel Corporation (Science and Technology Center for Big Data).
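    The abstract does not list the seven algorithms, but one classic family in this space is timestamp ordering (T/O). The minimal Python sketch below (a hypothetical illustration, not the paper's code) shows basic T/O conflict checks; note that the global lock and the central timestamp counter are exactly the kind of shared coordination points that stop scaling as core counts grow.

        import itertools
        import threading

        class Aborted(Exception):
            """Raised when a transaction must restart under T/O rules."""

        class TimestampOrdering:
            """Minimal single-version timestamp-ordering scheduler (sketch)."""

            def __init__(self):
                self._clock = itertools.count(1)  # centralized timestamp allocation
                self._lock = threading.Lock()     # protects per-record metadata
                self._data = {}                   # key -> (value, read_ts, write_ts)

            def begin(self):
                return next(self._clock)          # the transaction's timestamp

            def read(self, ts, key):
                with self._lock:
                    value, read_ts, write_ts = self._data.get(key, (None, 0, 0))
                    if ts < write_ts:             # a younger txn already overwrote it
                        raise Aborted(key)
                    self._data[key] = (value, max(read_ts, ts), write_ts)
                    return value

            def write(self, ts, key, value):
                with self._lock:
                    _, read_ts, write_ts = self._data.get(key, (None, 0, 0))
                    if ts < read_ts or ts < write_ts:  # would invalidate a later access
                        raise Aborted(key)
                    self._data[key] = (value, read_ts, ts)

    Even in this toy form, every transaction serializes on next(self._clock), which mirrors the kind of implementation-independent bottleneck the paper identifies for timestamp-based schemes.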

    Channel Access Management in Data Intensive Sensor Networks

    Channel access poses considerable challenges in Data Intensive Sensor Networks (DISNs), which support data-intensive applications such as Structural Health Monitoring (SHM). As the data load increases, the key performance parameters of such sensor networks degrade significantly: the successful packet delivery ratio drops due to frequent collisions and retransmissions, and the data glut increases overall latency and energy consumption. Given the tight limits on sensor node resources such as battery power, excessive transmissions in response to sensor queries can lead to premature network death. Beyond a certain load threshold, the performance characteristics of traditional WSNs become unacceptable. Research indicates that the successful packet delivery ratio in 802.15.4 networks can drop from 95% to 55% as the offered load increases from 1 packet/sec to 10 packets/sec. Since sensors in an SHM system commonly generate 6-8 packets/sec of vibration data, appropriate channel access schemes are essential for such data-intensive applications.

    In this work, we address the problem of significant performance degradation in a special-purpose DISN. Our specific focus is on the medium access control (MAC) layer, since it gives fine-grained control over channel access and energy waste. The goal of this dissertation is to design and evaluate a suite of channel access schemes that ensure graceful performance degradation in special-purpose DISNs as the network traffic load increases.

    First, we present a case study that investigates two distinct MAC proposals based on random access and scheduled access. The results of the case study motivate the development of hybrid access schemes. Next, we introduce novel hybrid channel access protocols for DISNs, ranging from a simple randomized transmission scheme that is robust under channel and topology dynamics to one that exploits limited topological information about neighboring sensors to minimize collisions and energy waste. These protocols combine randomized transmission with heuristic scheduling to alleviate the performance degradation caused by excessive collisions and retransmissions. We then propose a grid-based access scheduling protocol for mobile DISNs that is scalable and decentralized, handling sensor mobility efficiently with acceptable data loss and limited overhead. Finally, we extend the randomized transmission protocol from the hybrid approaches into an adaptable probability-based data transmission method that combines probabilistic transmission with heuristics, namely Latin Squares and a grid network, to tune sensors' transmission probabilities toward specific performance objectives. We evaluate all of the proposed protocols analytically and through simulation.
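    As a rough illustration of the randomized transmission idea above (an assumed sketch, not the dissertation's protocol), the Python snippet below simulates a slotted channel in which each of n contending sensors transmits with probability p. A slot succeeds only when exactly one sensor transmits, so the per-slot success rate n*p*(1-p)**(n-1) peaks near p = 1/n, which is why adapting transmission probabilities to the offered load matters as traffic grows.

        import random

        def run_slots(num_nodes, p, num_slots, seed=42):
            """Fraction of slots with exactly one transmitter (no collision)."""
            rng = random.Random(seed)
            successes = 0
            for _ in range(num_slots):
                senders = sum(rng.random() < p for _ in range(num_nodes))
                successes += (senders == 1)   # 0 senders: idle slot; 2+: collision
            return successes / num_slots

        # Success rate is highest when p is matched to the contention level n.
        for n in (5, 10, 20):
            print(n, round(run_slots(n, 1.0 / n, 20_000), 3))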

    A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems

    Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must enable correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, where reduced throughput because of concurrency protocols is not of tremendous concern. This thesis seeks to survey the traditional basis for concurrency in relational database management systems and contrast that with actual practice. SQL-92, the current standard for concurrency in relational database management systems, defines isolation levels that bound the allowable concurrency, and these are examined. Some of the ways in which DB2, a popular database, interprets these levels and finesses extra concurrency through performance enhancements are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and the superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the prospects for performance tuning seem bright, even at the expense of serializability. Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when necessary for the job at hand and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will now help academic ideas define concurrency control concepts for relational database management systems.
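    To make the isolation trade-off concrete, here is a toy Python sketch (an illustration of the survey's setting, not code from the thesis) of how lock retention separates two SQL-92 levels: under SERIALIZABLE a transaction holds its read locks until commit, two-phase-locking style, while under READ COMMITTED it releases each read lock immediately, gaining concurrency but admitting non-repeatable reads.

        import threading

        _LOCKS = {}                      # key -> lock (toy lock manager)
        _LOCKS_GUARD = threading.Lock()

        def _lock_for(key):
            with _LOCKS_GUARD:
                return _LOCKS.setdefault(key, threading.Lock())

        class Txn:
            """Toy transaction; locks are exclusive and deadlock is ignored."""

            def __init__(self, level):
                self.level = level       # "READ COMMITTED" or "SERIALIZABLE"
                self.held = []

            def read(self, db, key):
                lk = _lock_for(key)
                lk.acquire()
                value = db.get(key)
                if self.level == "SERIALIZABLE":
                    self.held.append(lk)  # held to commit: blocks writers, repeatable
                else:
                    lk.release()          # released at once: faster, weaker guarantees
                return value

            def commit(self):
                for lk in self.held:
                    lk.release()
                self.held.clear()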

    Stretching the capacity of Hardware Transactional Memory in IBM POWER architectures

    The hardware transactional memory (HTM) implementations in commercially available processors are significantly hindered by their tight capacity constraints. In practice, this renders current HTMs unsuitable for many real-world workloads of in-memory databases. This paper proposes SI-HTM, which stretches the capacity bounds of the underlying HTM, thus opening HTM to a much broader class of applications. SI-HTM leverages the HTM implementation of the IBM POWER architecture with a software layer to offer a single-version implementation of Snapshot Isolation. When compared to HTM- and software-based concurrency control alternatives, SI-HTM exhibits improved scalability, achieving speedups of up to 300% relative to HTM on in-memory database benchmarks.
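    SI-HTM's conflict detection lives inside the POWER HTM itself; as a software-only stand-in (a hedged sketch, not the paper's mechanism), the Python code below shows the commit rule that single-version Snapshot Isolation implies: writes are buffered privately, and a transaction aborts if any item in its write set was committed by another transaction after it began (first-committer-wins on write-write conflicts).

        import itertools
        import threading

        class SIStore:
            """Toy single-version snapshot isolation store."""

            def __init__(self):
                self._clock = itertools.count(1)
                self._commit_lock = threading.Lock()
                self._data = {}        # key -> (value, commit_ts of last writer)

            def begin(self):
                return {"start_ts": next(self._clock), "writes": {}}

            def read(self, txn, key):
                # Real single-version SI relies on the HTM to give each
                # transaction a stable snapshot; reading the latest committed
                # value is a simplification here.
                value, _ = self._data.get(key, (None, 0))
                return txn["writes"].get(key, value)

            def write(self, txn, key, value):
                txn["writes"][key] = value        # buffer until commit

            def commit(self, txn):
                with self._commit_lock:
                    for key in txn["writes"]:
                        _, wts = self._data.get(key, (None, 0))
                        if wts > txn["start_ts"]:  # write-write conflict
                            return False           # abort: first committer won
                    cts = next(self._clock)
                    for key, value in txn["writes"].items():
                        self._data[key] = (value, cts)
                    return True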