
    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further: it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
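    The restore-on-demand idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and method names, and the dict-based backup and per-segment log representation, are assumptions for illustration.

```python
class InstantRestore:
    """Illustrative sketch of on-demand segment restore after a media failure."""

    def __init__(self, backup, log_by_segment):
        self.backup = backup        # segment_id -> backup image (page_id -> value)
        self.log = log_by_segment   # segment_id -> ordered (page_id, value) updates
        self.restored = {}          # segment_id -> restored pages

    def read_page(self, segment_id, page_id):
        # Restore the containing segment on first access ("restore on demand"),
        # so the perceived wait is one segment restore, not the whole device.
        if segment_id not in self.restored:
            self.restored[segment_id] = self._restore_segment(segment_id)
        return self.restored[segment_id][page_id]

    def _restore_segment(self, segment_id):
        # Start from the backup image of just this segment ...
        pages = dict(self.backup.get(segment_id, {}))
        # ... and replay the archived log updates for this segment in order.
        for page_id, value in self.log.get(segment_id, []):
            pages[page_id] = value
        return pages
```

    A transaction touching an unrestored page thus waits only for one segment's backup read plus log replay, which is the source of the sub-second latencies reported above.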

    Main Memory Implementations for Binary Grouping

    An increasing number of applications depend on efficient storage and analysis features for XML data. Hence, query optimization and efficient evaluation techniques for the emerging XQuery standard become increasingly important. Many XQuery queries require nested expressions, and unnesting them often introduces binary grouping. We introduce several algorithms implementing binary grouping and analyze their time and space complexity. Experiments demonstrate their performance.
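    To make the operator concrete, here is a minimal hash-based sketch of binary grouping: each tuple of the outer input is paired with the group of inner tuples having a matching key. The function name and signature are illustrative assumptions, not drawn from the paper.

```python
from collections import defaultdict

def binary_group(outer, inner, key_outer, key_inner):
    # Build a hash index on the inner input (one pass) ...
    index = defaultdict(list)
    for t in inner:
        index[key_inner(t)].append(t)
    # ... then probe it once per outer tuple, pairing each outer tuple
    # with its (possibly empty) group of matching inner tuples.
    return [(t, index.get(key_outer(t), [])) for t in outer]
```

    This variant runs in expected linear time in the sizes of both inputs; the paper compares several such implementation strategies and their space costs.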

    Bloch oscillations of Bose-Einstein condensates: Quantum counterpart of dynamical instability

    We study the Bloch dynamics of a quasi one-dimensional Bose-Einstein condensate of cold atoms in a tilted optical lattice modeled by a Hamiltonian of Bose-Hubbard type: The corresponding mean-field system described by a discrete nonlinear Schrödinger equation can show a dynamical (or modulation) instability due to chaotic dynamics and equipartition over the quasimomentum modes. It is shown that these phenomena are related to a depletion of the Floquet-Bogoliubov states and a decoherence of the condensate in the many-particle description. Three different types of dynamics are distinguished: (i) decaying oscillations in the region of dynamical instability, and (ii) persisting Bloch oscillations or (iii) periodic decay and revivals in the region of stability. Comment: 12 pages, 14 figures
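    For reference, the mean-field system mentioned above is typically a discrete nonlinear Schrödinger equation of the form below (a textbook form in units with ħ = 1; the symbols J for the tunneling amplitude, F for the tilt, and g for the interaction strength are notational assumptions, not necessarily the paper's):

```latex
i\,\dot{\psi}_l \;=\; -J\left(\psi_{l+1}+\psi_{l-1}\right) \;+\; F\,l\,\psi_l \;+\; g\,|\psi_l|^{2}\,\psi_l
```

    The nonlinear term g|ψ_l|²ψ_l is what drives the modulation instability discussed in the abstract; for g = 0 the equation reduces to ordinary (linear) Bloch oscillations.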

    Sampling-Based Query Re-Optimization

    Despite decades of work, query optimizers still make mistakes on "difficult" queries because of bad cardinality estimates, often due to the interaction of multiple predicates and correlations in the data. In this paper, we propose a low-cost post-processing step that can take a plan produced by the optimizer, detect when it is likely to have made such a mistake, and take steps to fix it. Specifically, our solution is a sampling-based iterative procedure that requires almost no changes to the original query optimizer or query evaluation mechanism of the system. We show that this indeed imposes low overhead and catches cases where three widely used optimizers (PostgreSQL and two commercial systems) make large errors. Comment: This is the extended version of a paper with the same title and authors that appears in the Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2016)
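    The iterative loop can be sketched abstractly as below. Everything here is an illustrative assumption: the plan representation (a name plus a list of subexpressions), the `optimizer` and `sample_estimate` callables, and the stopping rule are stand-ins, not the paper's actual interfaces.

```python
def reoptimize(query, optimizer, sample_estimate, max_rounds=5):
    """Sampling-based iterative re-optimization (illustrative sketch).

    optimizer(query, corrections) -> plan as (name, subexpressions);
    sample_estimate(subexpr)      -> cardinality measured on a sample.
    """
    corrections = {}
    plan = optimizer(query, corrections)
    for _ in range(max_rounds):
        name, subexprs = plan
        # Validate the optimizer's estimates by sampling the plan's subexpressions.
        for s in subexprs:
            corrections[s] = sample_estimate(s)
        # Re-optimize with the corrected cardinalities fed back in.
        new_plan = optimizer(query, corrections)
        if new_plan == plan:   # plan stabilized: corrections no longer change it
            return plan
        plan = new_plan
    return plan
```

    The key property matching the abstract is that the loop sits entirely outside the optimizer: it only calls the optimizer again with better numbers, requiring almost no changes to the optimizer or executor.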

    Towards a Landau-Zener formula for an interacting Bose-Einstein condensate

    We consider the Landau-Zener problem for a Bose-Einstein condensate in a linearly varying two-level system, both for the full many-particle system and in the mean-field approximation. The many-particle problem can be solved approximately within an independent-crossings approximation, which yields an explicit Landau-Zener formula. Comment: RevTeX, 8 pages, 9 figures
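    For orientation, the textbook single-particle Landau-Zener result that the paper generalizes reads as follows (a standard convention, with sweep rate α and coupling Δ; the many-particle formula derived in the paper differs from this):

```latex
H(t) \;=\; \frac{\alpha t}{2}\,\sigma_z \;+\; \frac{\Delta}{2}\,\sigma_x,
\qquad
P_{\mathrm{LZ}} \;=\; \exp\!\left(-\frac{\pi \Delta^{2}}{2\hbar\,\alpha}\right)
```

    Here P_LZ is the probability of a diabatic transition across the avoided crossing during the linear sweep.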

    Run Generation Revisited: What Goes Up May or May Not Come Down

    In this paper, we revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size M, and output runs (contiguously sorted chunks of elements) that are as long as possible. We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted. We study the problem in the online setting, both with and without resource augmentation, and in the offline setting. (1) We analyze alternating-up-down replacement selection (runs alternate between sorted and reverse sorted), which was studied by Knuth as far back as 1963. We show that this simple policy is asymptotically optimal. Specifically, we show that alternating-up-down replacement selection is 2-competitive and no deterministic online algorithm can perform better. (2) We give online algorithms having smaller competitive ratios with resource augmentation. Specifically, we exhibit a deterministic algorithm that, when given a buffer of size 4M, is able to match or beat any optimal algorithm having a buffer of size M. Furthermore, we present a randomized online algorithm which is 7/4-competitive when given a buffer twice that of the optimal. (3) We demonstrate that performance can also be improved with a small amount of foresight. We give an algorithm, which is 3/2-competitive, with foreknowledge of the next 3M elements of the input stream. For the extreme case where all future elements are known, we design a PTAS for computing the optimal strategy a run generation algorithm must follow. (4) Finally, we present algorithms tailored for nearly sorted inputs which are guaranteed to have optimal solutions with sufficiently long runs.
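    For background, classic (single-direction) replacement selection, on which the alternating-up-down policy builds, can be sketched with a heap; this is the textbook algorithm, not the paper's variant, and the list-based interface is an illustrative simplification.

```python
import heapq

def replacement_selection(stream, M):
    """Classic replacement selection with a buffer of M elements.

    Emits ascending runs; the alternating-up-down variant studied in the
    paper would flip the sort direction on every run.
    """
    buffer = stream[:M]
    heapq.heapify(buffer)
    rest = iter(stream[M:])
    runs, current, frozen = [], [], []
    while buffer:
        smallest = heapq.heappop(buffer)
        current.append(smallest)               # extend the current run
        nxt = next(rest, None)
        if nxt is not None:
            if nxt >= smallest:
                heapq.heappush(buffer, nxt)    # can still join the current run
            else:
                frozen.append(nxt)             # must wait for the next run
        if not buffer:                         # current run is finished
            runs.append(current)
            current = []
            buffer = frozen                    # frozen elements start a new run
            heapq.heapify(buffer)
            frozen = []
    if current:
        runs.append(current)
    return runs
```

    On random input this yields runs of expected length about 2M, which is exactly why 2-competitiveness is the natural benchmark in the abstract.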

    From Cooperative Scans to Predictive Buffer Management

    In analytical applications, database systems often need to sustain workloads with multiple concurrent scans hitting the same table. The Cooperative Scans (CScans) framework, which introduces an Active Buffer Manager (ABM) component into the database architecture, has been the most effective and elaborate response to this problem, and was initially developed in the X100 research prototype. We now report on the experiences of integrating Cooperative Scans into its industrial-strength successor, the Vectorwise database product. During this implementation we invented a simpler optimization of concurrent scan buffer management, called Predictive Buffer Management (PBM). PBM is based on the observation that in a workload with long-running scans, the buffer manager has quite a bit of information on the workload in the immediate future, such that an approximation of the ideal OPT algorithm becomes feasible. In the evaluation on both synthetic benchmarks as well as a TPC-H throughput run we compare the benefits of naive buffer management (LRU) versus CScans, PBM and OPT; showing that PBM achieves benefits close to Cooperative Scans, while incurring much lower architectural impact. Comment: VLDB201
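    The core observation can be sketched in a few lines: since active sequential scans reveal which pages will be needed soon, eviction can approximate Belady's OPT by discarding the page whose next use is farthest away. The function, the page-number representation, and the assumption that each scan reads pages in increasing order are all illustrative simplifications, not Vectorwise's actual mechanism.

```python
def evict_victim(cached_pages, scan_cursors):
    """Pick an eviction victim, approximating Belady's OPT.

    cached_pages: page numbers currently in the buffer pool.
    scan_cursors: next page each active sequential scan will read.
    """
    def next_use(page):
        # Distance until some scan reaches this page; pages every scan has
        # already passed will never be read again (infinite distance).
        ahead = [page - cursor for cursor in scan_cursors if page >= cursor]
        return min(ahead) if ahead else float("inf")

    # Evict the page whose next use is farthest in the future.
    return max(cached_pages, key=next_use)
```

    LRU, by contrast, would keep recently scanned pages that no scan will revisit, which is precisely the pathology PBM avoids.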

    Event Stream Processing with Multiple Threads

    Current runtime verification tools seldom make use of multi-threading to speed up the evaluation of a property on a large event trace. In this paper, we present an extension to the BeepBeep 3 event stream engine that allows the use of multiple threads during the evaluation of a query. Various parallelization strategies are presented and described on simple examples. The implementation of these strategies is then evaluated empirically on a sample of problems. Compared to the previous, single-threaded version of the BeepBeep engine, the allocation of just a few threads to specific portions of a query provides a dramatic improvement in running time.
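    One family of such strategies is data parallelism: split the trace into slices, evaluate each slice in its own thread, and combine partial results. The sketch below illustrates this for an associative query (an event count); it is a generic illustration in Python, not BeepBeep's Java API, and the function name and thread count are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_count(trace, predicate, threads=4):
    """Count events satisfying `predicate`, one trace slice per thread.

    Only works because counting is associative: partial counts from the
    slices can simply be summed. Stateful properties need other strategies.
    """
    size = max(1, len(trace) // threads)
    slices = [trace[i:i + size] for i in range(0, len(trace), size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        partials = pool.map(lambda s: sum(1 for e in s if predicate(e)), slices)
    return sum(partials)
```

    Queries whose state depends on event order cannot be sliced this naively, which is why the paper describes several distinct parallelization strategies rather than one.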

    Quantum tunneling as a classical anomaly

    Classical mechanics is a singular theory in that real-energy classical particles can never enter classically forbidden regions. However, if one regulates classical mechanics by allowing the energy E of a particle to be complex, the particle exhibits quantum-like behavior: Complex-energy classical particles can travel between classically allowed regions separated by potential barriers. When Im(E) -> 0, the classical tunneling probabilities persist. Hence, one can interpret quantum tunneling as an anomaly. A numerical comparison of complex classical tunneling probabilities with quantum tunneling probabilities leads to the conjecture that as Re(E) increases, complex classical tunneling probabilities approach the corresponding quantum probabilities. Thus, this work attempts to generalize the Bohr correspondence principle from classically allowed to classically forbidden regions. Comment: 12 pages, 7 figures
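    Concretely, the complexified dynamics amounts to integrating the usual Hamilton equations with the phase-space variables allowed to be complex (a standard formulation; V(x) and m denote the usual potential and mass):

```latex
\dot{x} \;=\; \frac{\partial H}{\partial p},
\qquad
\dot{p} \;=\; -\frac{\partial H}{\partial x},
\qquad
H(x,p) \;=\; \frac{p^{2}}{2m} + V(x),
\qquad x,\,p,\,E \in \mathbb{C}
```

    For complex E the turning points V(x) = E move off the real axis, so trajectories can pass around barriers that would block any real-energy particle.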

    Biorthogonal quantum mechanics

    The Hermiticity condition in quantum mechanics required for the characterization of (a) physical observables and (b) generators of unitary motions can be relaxed into a wider class of operators whose eigenvalues are real and whose eigenstates are complete. In this case, the orthogonality of eigenstates is replaced by the notion of biorthogonality that defines the relation between the Hilbert space of states and its dual space. The resulting quantum theory, which might appropriately be called 'biorthogonal quantum mechanics', is developed here in some detail in the case for which the Hilbert-space dimensionality is finite. Specifically, characterizations of probability assignment rules, observable properties, pure and mixed states, spin particles, measurements, combined systems and entanglements, perturbations, and dynamical aspects of the theory are developed. The paper concludes with a brief discussion on infinite-dimensional systems. © 2014 IOP Publishing Ltd
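    The biorthogonality notion at the heart of the abstract can be summarized by the standard relations below, where |φ_n⟩ are eigenstates of the (non-Hermitian) Hamiltonian H and |χ_n⟩ are eigenstates of its adjoint H†; with a real spectrum, as assumed in the abstract, the eigenvalues of H† coincide with those of H:

```latex
H\,|\phi_n\rangle = E_n\,|\phi_n\rangle,
\qquad
H^{\dagger}\,|\chi_n\rangle = \bar{E}_n\,|\chi_n\rangle,
\qquad
\langle \chi_m | \phi_n \rangle = \delta_{mn},
\qquad
\sum_n |\phi_n\rangle\langle \chi_n| = \mathbb{1}
```

    The pairing ⟨χ_m|φ_n⟩ = δ_mn replaces the usual orthonormality, and the resolution of the identity built from the dual pair is what underlies the probability rules and state decompositions developed in the paper.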