
    Hybrid concurrency control and recovery for multi-level transactions

    Multi-level transaction schedulers apply conflict-serializability at different levels. They exploit the fact that many low-level conflicts (e.g. on the level of pages) become irrelevant if higher-level application semantics is taken into account. Multi-level transactions may therefore lead to an increase in concurrency. It is easy to generalize locking protocols to the case of multi-level transactions; here, however, the possibility of deadlocks may diminish the gain in concurrency. This stimulates the investigation of optimistic or hybrid approaches to concurrency control. Until now, no hybrid concurrency control protocol for multi-level transactions has been published. The new FoPL protocol (Forward oriented Concurrency Control with Preordered Locking) is such a protocol. It employs access lists on the database objects and forward-oriented commit validation. The basic test on all levels is based on the reordering of the access lists. When combined with queueing and deadlock detection, the protocol is not only sound but also complete for multi-level serializable schedules, a clear advantage of FoPL over locking protocols. The complexity of deadlock detection is not critical, since waiting transactions do not hold locks on database objects. Furthermore, the basic FoPL protocol can be optimized in various ways. Since the concurrency control protocol may force transactions to be aborted, it is necessary to support operation logging. It is shown that FoPL, as well as multi-level locking protocols, can easily be coupled with the ARIES algorithms, which also solves the problem of rollback during normal processing and crash recovery.
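
    The core idea of validation by access-list reordering can be illustrated with a small sketch. The Python fragment below is an illustration only, not the published FoPL rules: it records accesses in per-object access lists without taking locks and, at commit time, checks whether the committing transaction's entries could be moved in front of all other active transactions' entries without swapping two conflicting operations. Multi-level semantics, queueing, and deadlock handling are omitted.

        # Illustrative sketch of forward-oriented validation over per-object
        # access lists. The concrete FoPL rules (level-specific conflict
        # relations, queueing, deadlock detection) are not reproduced here.
        from collections import defaultdict

        access_lists = defaultdict(list)   # obj -> list of (txn_id, op), op in {'r', 'w'}

        def record_access(txn_id, obj, op):
            """Append the access to the object's access list; no lock is acquired."""
            access_lists[obj].append((txn_id, op))

        def conflicts(op_a, op_b):
            """Two operations on the same object conflict unless both are reads."""
            return not (op_a == 'r' and op_b == 'r')

        def validate(txn_id):
            """Commit test: txn_id passes if, on every object it touched, its
            entries can be reordered in front of the other transactions' entries
            without moving them past a conflicting operation."""
            for entries in access_lists.values():
                foreign_ops = []           # earlier operations of other transactions
                for other_txn, op in entries:
                    if other_txn == txn_id:
                        if any(conflicts(op, f) for f in foreign_ops):
                            return False   # reordering would swap conflicting operations
                    else:
                        foreign_ops.append(op)
            return True

    In a complete protocol, a failed validation would either queue the transaction or abort it, and the entries of committed transactions would be pruned from the access lists.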

    Acta Cybernetica: Volume 14, Number 3.


    Welfare-to-Work Experiences with Specific Work-First Programmes in Selected Countries

    This paper reviews the evidence on specific mandatory work-first programmes (job search assistance and workfare) for welfare recipients in the United States, the United Kingdom, Denmark, the Netherlands and Germany. It primarily refers to experimental and econometric evaluations. The effectiveness of specific programme elements in promoting the transition from welfare to work is compared, and the advantage of combining work-first with training programmes and in-work benefits is discussed. Some policy conclusions are drawn. Keywords: welfare-to-work, evaluations.

    Theory and Practice of Transactional Method Caching

    Nowadays, tiered architectures are widely accepted for constructing large-scale information systems. In this context, application servers often form the bottleneck for a system's efficiency. An application server exposes an object-oriented interface consisting of a set of methods which are accessed by potentially remote clients. The idea of method caching is to store the results of read-only method invocations against the application server's interface on the client side. If the client invokes the same method with the same arguments again, the corresponding result can be taken from the cache without contacting the server. It has been shown that this approach can considerably improve a real-world system's efficiency. This paper extends the concept of method caching by addressing the case where clients wrap related method invocations in ACID transactions. Demarcating sequences of method calls in this way is supported by many important application server standards. In this context, the paper presents an architecture, a theory and an efficient protocol for maintaining full transactional consistency, and in particular serializability, when using a method cache on the client side. In order to create a protocol for scheduling cached method results, the paper extends a classical transaction formalism. Based on this extension, a recovery protocol and an optimistic serializability protocol are derived. The latter differs from traditional transactional cache protocols in many essential ways. An efficiency experiment validates the approach: using the cache, a system's performance and scalability are considerably improved.
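
    The basic client-side mechanism can be sketched briefly. The Python fragment below is a minimal illustration of a method cache keyed on method name and arguments, not the paper's protocol: it assumes a hypothetical invoke_remote callable that returns both the result and the set of server objects the invocation read, and a notify_invalidation hook through which the server reports conflicting committed writes. The transactional scheduling and recovery rules developed in the paper would sit on top of such a structure.

        # Minimal client-side method cache for read-only server methods.
        # invoke_remote and notify_invalidation are hypothetical names used
        # only for this sketch.
        class MethodCache:
            def __init__(self, invoke_remote):
                self._invoke_remote = invoke_remote   # performs the real server call
                self._results = {}                    # (method, args) -> cached result
                self._read_sets = {}                  # (method, args) -> objects read on the server

            def call(self, method, *args):
                key = (method, args)                  # arguments must be hashable
                if key in self._results:
                    return self._results[key]         # cache hit: no server round trip
                result, read_set = self._invoke_remote(method, *args)
                self._results[key] = result
                self._read_sets[key] = set(read_set)
                return result

            def notify_invalidation(self, written_objects):
                """Drop every cached result whose read set overlaps the objects
                written by a transaction that committed on the server."""
                written = set(written_objects)
                stale = [k for k, deps in self._read_sets.items() if deps & written]
                for k in stale:
                    del self._results[k]
                    del self._read_sets[k]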

    Improving OLTP Concurrency through Early Lock Release

    Since the beginning of the multi-core era, database systems research has refocused on increasing concurrency. Even though database systems have long been able to accommodate concurrent requests, the exploding number of available cores per chip has surfaced new difficulties. More and more transactions can be served in parallel (since more threads can run simultaneously), and thus concurrency in a database system is more important than ever in order to exploit the available resources. In this paper, we evaluate Early Lock Release (ELR), a technique that allows the early release of locks to improve the concurrency level and overall throughput in OLTP. This technique has been proven to lead to a database system that produces correct and recoverable histories, but it has never been implemented in a full-scale DBMS. A new action is introduced which decouples the commit action from the log flush to non-volatile storage. ELR can help us increase the concurrency and the predictability of a database system without losing correctness and recoverability. We conclude that applying ELR to a DBMS, especially one with a centralized log scheme, makes sense because (a) it carries negligible overhead, (b) it improves the concurrency level by allowing a transaction to acquire the necessary locks as soon as the previous holder of the locks has finished its useful work (and not after it has committed to disk), and (c), as a result, the overall throughput can be increased by up to 2x for TPC-C and 7x for TPC-B workloads. Additionally, the variance of the log-flush waiting time is eliminated, because transactions no longer wait for a log flush before they can release their locks.
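
    The decoupling of the commit from the log flush can be illustrated with a short sketch. The Python fragment below is an assumption-laden illustration, not the evaluated implementation: a committing transaction appends its commit record to an in-memory log, releases its locks immediately so that successors can proceed, and only acknowledges the commit to the client once a group-commit style flush has made the record durable. The lock manager is represented by a release_locks callable supplied by the caller.

        # Illustrative sketch of Early Lock Release with a toy write-ahead log.
        import threading

        class LogManager:
            def __init__(self):
                self._buffer = []                 # in-memory log records
                self._flushed_lsn = 0             # highest LSN known to be durable
                self._cv = threading.Condition()

            def append(self, record):
                with self._cv:
                    self._buffer.append(record)
                    return len(self._buffer)      # LSN of the appended record

            def flush(self):
                """Called periodically by a background group-commit thread;
                a real system would force the buffer to stable storage here."""
                with self._cv:
                    self._flushed_lsn = len(self._buffer)
                    self._cv.notify_all()

            def wait_durable(self, lsn):
                with self._cv:
                    while self._flushed_lsn < lsn:
                        self._cv.wait()

        def commit_with_elr(txn_id, log, release_locks):
            lsn = log.append(("COMMIT", txn_id))
            release_locks(txn_id)     # ELR: waiting transactions may acquire the locks now
            log.wait_durable(lsn)     # the client is acknowledged only after this returns

    Without ELR, release_locks would only be called after wait_durable returns, so every lock waiter would also pay for the log flush of its predecessor.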

    How to quantify coherence: Distinguishing speakable and unspeakable notions

    Quantum coherence is a critical resource for many operational tasks. Understanding how to quantify and manipulate it also promises to have applications for a diverse set of problems in theoretical physics. For certain applications, however, one requires coherence between the eigenspaces of specific physical observables, such as energy, angular momentum, or photon number, and it makes a difference which eigenspaces appear in the superposition. For others, there is a preferred set of subspaces relative to which coherence is deemed a resource, but it is irrelevant which of the subspaces appear in the superposition. We term these two types of coherence unspeakable and speakable, respectively. We argue that a useful approach to quantifying and characterizing unspeakable coherence is provided by the resource theory of asymmetry when the symmetry group is a group of translations, and we translate a number of prior results on asymmetry into the language of coherence. We also highlight some of the applications of this approach, for instance, in the context of quantum metrology, quantum speed limits, quantum thermodynamics, and NMR. The question of how best to treat speakable coherence as a resource is also considered. We review a popular approach in terms of operations that preserve the set of incoherent states, propose an alternative approach in terms of operations that are covariant under dephasing, and outline the challenge of providing a physical justification for either approach. Finally, we note some mathematical connections that hold among the different approaches to quantifying coherence. Comment: A non-technical summary of the results and applications is provided in the first section. V5 is close to the published version. Typos corrected.
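
    For reference, the two frameworks mentioned above can be pinned down with standard definitions; the LaTeX snippet below states them in a common form (the paper's own notation may differ). Here $\Delta$ is the completely dephasing map with respect to a fixed incoherent basis $\{\lvert i\rangle\}$ and $S$ denotes the von Neumann entropy.

        % Completely dephasing map relative to the preferred basis {|i>}:
        \Delta(\rho) = \sum_i \lvert i\rangle\langle i\rvert \, \rho \, \lvert i\rangle\langle i\rvert

        % A channel E is dephasing-covariant iff it commutes with the dephasing map:
        \mathcal{E}\circ\Delta = \Delta\circ\mathcal{E}

        % A widely used quantifier in the incoherent-state framework,
        % the relative entropy of coherence:
        C_{\mathrm{rel}}(\rho) = S\bigl(\Delta(\rho)\bigr) - S(\rho)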