
    An Empirical Investigation of Four Strategies for Serializing Schedules in Transaction Processing

    A database management system (DBMS) is a large program that allows users to create and maintain databases. Among its many capabilities, this study focuses on transaction management: providing correct, concurrent access to the database by many users at the same time. Without transaction management, livelocks, deadlocks, and non-serializable schedules could occur. A livelock can occur when a transaction is waiting on a locked data item and a second transaction arrives; after the item is unlocked, the second transaction locks it, so the first transaction continues to wait. Conceivably, the first transaction could wait indefinitely to lock the data item. A deadlock is a situation in which each member of a set of two or more transactions is waiting to lock an item currently locked by another transaction in the set; none of the transactions can proceed, so all wait indefinitely. A schedule is serial if, for every pair of transactions, all of the operations of one transaction execute before any of the operations of the other. A schedule is serializable if its effect on the database is the same as that of some serial execution of the same set of transactions, and non-serializable if its effect is not equivalent to that of any serial schedule over the same transactions. The scheduler, a component of the DBMS, is responsible for resolving any livelocks, deadlocks, or non-serializable schedules that occur. This study looks specifically at non-serializable schedules. There are many methods by which the scheduler can serialize them; this study proposes and examines four strategies for detecting and resolving non-serializable schedules, using computer simulation.
These strategies reduce a non-serializable schedule to a serializable or serial schedule, eliminating the possibility of incorrectly updating data items within the database. It is shown experimentally that, of the four strategies, the best is the one that delays the transaction that has executed the fewest steps at the point where non-serializability is detected.
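The serializability definitions above can be illustrated with the standard precedence-graph test for conflict-serializability. The sketch below is not the paper's detection strategy; the operation format `(txn_id, op, item)` and all function names are assumptions made for illustration:

```python
# Illustrative sketch (not the paper's algorithm): test whether a schedule
# is conflict-serializable by building a precedence graph over transactions
# and checking it for cycles with depth-first search.
from collections import defaultdict

def is_conflict_serializable(schedule):
    """schedule: list of (txn_id, op, item) with op 'r' or 'w'.
    Returns True iff the precedence graph is acyclic."""
    edges = defaultdict(set)
    txns = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        txns.add(ti)
        for tj, op_j, y in schedule[i + 1:]:
            # Two ops conflict if they touch the same item, belong to
            # different transactions, and at least one is a write.
            if x == y and ti != tj and 'w' in (op_i, op_j):
                edges[ti].add(tj)  # ti's op precedes tj's conflicting op
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in txns}
    def has_cycle(t):
        color[t] = GRAY
        for u in edges[t]:
            if color[u] == GRAY or (color[u] == WHITE and has_cycle(u)):
                return True  # back edge found: graph has a cycle
        color[t] = BLACK
        return False
    return not any(color[t] == WHITE and has_cycle(t) for t in txns)

# Serial on item A: T1 runs entirely before T2 -> serializable
print(is_conflict_serializable(
    [(1, 'r', 'A'), (1, 'w', 'A'), (2, 'r', 'A'), (2, 'w', 'A')]))  # True
# Classic lost-update interleaving -> not serializable
print(is_conflict_serializable(
    [(1, 'r', 'A'), (2, 'r', 'A'), (1, 'w', 'A'), (2, 'w', 'A')]))  # False
```

In the second schedule the graph contains both edge T1→T2 and edge T2→T1, so no equivalent serial order exists; a scheduler of the kind the study describes would have to delay or restart one of the two transactions.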

    A bibliography on formal methods for system specification, design and validation

    Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, yielding 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based search methods were employed, and the keywords used in the online search are listed.

    Staring into the abyss: An evaluation of concurrency control with one thousand cores

    Computer architectures are moving towards an era dominated by many-core machines with dozens or even hundreds of cores on a single chip. This unprecedented level of on-chip parallelism introduces a new dimension to scalability that current database management systems (DBMSs) were not designed for. In particular, as the number of cores increases, the problem of concurrency control becomes extremely challenging. With hundreds of threads running in parallel, the complexity of coordinating competing accesses to data will likely diminish the gains from increased core counts. To better understand just how unprepared current DBMSs are for future CPU architectures, we performed an evaluation of concurrency control for on-line transaction processing (OLTP) workloads on many-core chips. We implemented seven concurrency control algorithms on a main-memory DBMS and, using computer simulations, scaled our system to 1024 cores. Our analysis shows that all algorithms fail to scale to this magnitude, but for different reasons. In each case, we identify fundamental bottlenecks that are independent of the particular database implementation and argue that even state-of-the-art DBMSs suffer from these limitations. We conclude that rather than pursuing incremental solutions, many-core chips may require a completely redesigned DBMS architecture that is built from the ground up and tightly coupled with the hardware. Funded by Intel Corporation (Science and Technology Center for Big Data).
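One classic lock-based scheme of the kind such evaluations cover is strict two-phase locking (2PL). The sketch below is a hypothetical single-process illustration, not the paper's implementation; the `LockManager` and `Transaction` names, and the per-item mutex design, are assumptions made for illustration. A real lock manager would also distinguish shared from exclusive locks and handle deadlocks.

```python
# Minimal sketch of strict two-phase locking (2PL): a transaction takes
# a lock before every access (growing phase) and releases all of its
# locks only at commit (shrinking phase happens all at once).
import threading

class LockManager:
    """Maps each data item to a mutex, created on first use."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}

    def get(self, item):
        with self._guard:
            return self._locks.setdefault(item, threading.Lock())

class Transaction:
    def __init__(self, lm, store):
        self.lm, self.store, self.held = lm, store, {}

    def _lock(self, item):
        # Acquire each item's lock at most once per transaction.
        if item not in self.held:
            lock = self.lm.get(item)
            lock.acquire()
            self.held[item] = lock

    def read(self, item):
        self._lock(item)
        return self.store.get(item, 0)

    def write(self, item, value):
        self._lock(item)
        self.store[item] = value

    def commit(self):
        # Shrinking phase: release everything only now, which guarantees
        # strict (and hence conflict-serializable) schedules.
        for lock in self.held.values():
            lock.release()
        self.held.clear()

# Transfer 10 from A to B under strict 2PL.
store = {"A": 100, "B": 50}
t = Transaction(LockManager(), store)
t.write("A", t.read("A") - 10)
t.write("B", t.read("B") + 10)
t.commit()
print(store)  # {'A': 90, 'B': 60}
```

Under contention from many threads, every access funnels through these per-item mutexes, which is exactly the kind of coordination cost the abstract argues erodes the gains of hundreds of cores.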