    Database recovery

    Recovery techniques are an important aspect of database systems. They are essential to ensure that data integrity is maintained after any type of failure occurs. The recovery mechanism must be designed so that the availability and performance of the system are not unacceptably impacted by the recovery algorithms running during normal execution. On the other hand, enough information must be stored so that the database can be restored or transactions backed out in a reasonable amount of time. Concepts, techniques, and problems associated with database recovery will be presented in this thesis. The recovery issues for both centralized and distributed systems will be discussed, along with the tradeoffs of different recovery tools. The database recovery schemes in IMS/VS, DB2, and SDD-1 will be described to show approaches in existing systems.
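    The tradeoff described above, recording enough information during normal execution to permit restore and transaction backout without hurting performance, is the core of write-ahead logging. Below is a minimal Python sketch with invented names and no real I/O, assuming simple key-value pages and an append-only log; it illustrates the technique generically rather than any of the systems named in the abstract.

```python
# Minimal write-ahead-logging sketch: before-images are logged before
# data is changed, so an uncommitted transaction can be backed out.
# A real system would force log records to stable storage.

class Database:
    def __init__(self):
        self.pages = {}   # key -> value, stands in for data pages
        self.log = []     # append-only log, written ahead of the pages

    def write(self, txn_id, key, value):
        before = self.pages.get(key)
        self.log.append(("UPDATE", txn_id, key, before, value))  # undo + redo info
        self.pages[key] = value                                  # then update data

    def commit(self, txn_id):
        self.log.append(("COMMIT", txn_id))

    def backout(self, txn_id):
        # Restore before-images of this transaction in reverse log order.
        for rec in reversed(self.log):
            if rec[0] == "UPDATE" and rec[1] == txn_id:
                _, _, key, before, _ = rec
                if before is None:
                    self.pages.pop(key, None)
                else:
                    self.pages[key] = before
        self.log.append(("ABORT", txn_id))


db = Database()
db.write("T1", "x", 42)
db.backout("T1")            # T1 leaves no trace after backout
assert "x" not in db.pages
```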

    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore (even data not yet restored) by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
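    A hedged sketch of the on-demand idea follows: application reads drive which segment is restored next, so data becomes accessible long before the whole device is repaired. The backup image and per-segment log replay are abstracted placeholders, not the paper's actual ARIES-based machinery.

```python
# Sketch of on-demand segment restore. A read of a not-yet-restored
# segment restores exactly that segment first; idle bandwidth restores
# the rest in the background.

class InstantRestoreDevice:
    def __init__(self, backup, log_replay, num_segments):
        self.backup = backup            # segment_id -> backup bytes
        self.log_replay = log_replay    # function(segment_id, bytes) -> bytes
        self.restored = {}              # segments restored so far
        self.pending = set(range(num_segments))

    def read(self, segment_id):
        # Application accesses guide the restore order.
        if segment_id in self.pending:
            self._restore_segment(segment_id)
        return self.restored[segment_id]

    def _restore_segment(self, segment_id):
        data = self.backup[segment_id]                          # fetch from backup
        self.restored[segment_id] = self.log_replay(segment_id, data)  # replay log
        self.pending.discard(segment_id)

    def restore_in_background(self):
        # Restore the remaining segments when no application read is waiting.
        for seg in sorted(self.pending):
            self._restore_segment(seg)


backup = {0: b"seg0", 1: b"seg1"}
dev = InstantRestoreDevice(backup, lambda sid, b: b, num_segments=2)
dev.read(1)   # segment 1 is restored immediately, ahead of segment 0
```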

    Checkpointing and Recovery in Distributed and Database Systems

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result is useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of each individual data item of a database: by applying the condition, we can start from any useful checkpoint of any data item and then incrementally add checkpoints of other data items until we get a transaction-consistent global checkpoint of the database. This result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive, low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently, while at the same time forcing processes to take additional checkpoints so that each checkpoint is part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
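    For reference, the property itself can be checked brute-force directly from the definition above: every transaction's writes must be either all reflected or all absent in the chosen per-item checkpoints. The Python sketch below only illustrates that property with invented inputs; the thesis's contribution is a condition that avoids exactly this kind of global check.

```python
# Brute-force check of transaction consistency for a set of per-data-item
# checkpoints. Inputs are simplified: each item has one checkpoint
# timestamp, and each write carries the timestamp at which it occurred.

def transaction_consistent(checkpoint_ts, writes):
    """
    checkpoint_ts: data_item -> timestamp of that item's checkpoint
    writes: list of (txn_id, data_item, write_ts) events
    A write is reflected iff it happened at or before its item's checkpoint.
    """
    reflected_by_txn = {}
    for txn_id, item, ts in writes:
        if item not in checkpoint_ts:
            continue
        reflected = ts <= checkpoint_ts[item]
        seen = reflected_by_txn.setdefault(txn_id, reflected)
        if seen != reflected:
            return False   # transaction partially reflected: inconsistent
    return True


# T1 wrote x at ts=5 and y at ts=6; checkpointing both items at ts=5
# captures only half of T1, so the global checkpoint is not consistent.
assert not transaction_consistent({"x": 5, "y": 5},
                                  [("T1", "x", 5), ("T1", "y", 6)])
```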

    An Examination of How Robots, Artificial Intelligence, and Machine Learning Are Being Applied in the Medical and Healthcare Industries

    Machine learning techniques are associated with diagnostic systems to apply methods that enable computers to link patient data to earlier data and give instructions to correct the disease. In recent years, researchers have promoted several data-mining-based techniques for disease diagnosis. Each function in machine learning and data mining techniques is built from characteristics and features. As part of prognosis, information must be separated from patient data and information retrieved from stored databases and comparative records. For any disease, early diagnosis will determine the chances of a correct recovery. Disease prediction therefore becomes a more important task to support physicians in delivering efficient treatment to patients. In health care, data is being created and accumulated at an extraordinary rate. Data for medical profiling is often found in a variety of sources such as electronic health records, lab and imaging systems, doctor notes, and accounts. The medical records database will then contain irrelevant data drawn from these multiple sources. Preprocessing the data, eliminating the irrelevant parts, and making the result available for predictive analysis is one of the significant difficulties of the health care industry.
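    To make the preprocessing difficulty concrete, here is a minimal pandas sketch with hypothetical column names; real pipelines depend entirely on the source systems (EHR, lab, imaging, notes) named above, so every column and rule below is an assumption for illustration.

```python
# Minimal preprocessing sketch for medical records merged from multiple
# sources. Column names are hypothetical, not from any real schema.

import pandas as pd

def preprocess(records: pd.DataFrame) -> pd.DataFrame:
    relevant = ["patient_id", "age", "lab_result", "diagnosis"]  # assumed schema
    df = records[relevant].copy()             # drop irrelevant columns
    df = df.drop_duplicates("patient_id")     # records duplicated across sources
    df = df.dropna(subset=["lab_result"])     # rows unusable for prediction
    df["age"] = pd.to_numeric(df["age"], errors="coerce")  # free-text fields are messy
    return df.dropna(subset=["age"])
```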

    ElasTraS: An Elastic Transactional Data Store in the Cloud

    Over the last couple of years, "Cloud Computing" or "Elastic Computing" has emerged as a compelling and successful paradigm for internet-scale computing. One of the major contributing factors to this success is the elasticity of resources. In spite of the elasticity provided by the infrastructure and the scalable design of the applications, the elephant (or the underlying database) which drives most of these web-based applications is not very elastic and scalable, and hence limits scalability. In this paper, we propose ElasTraS, which addresses this issue of scalability and elasticity of the data store in a cloud computing environment, leveraging the elastic nature of the underlying infrastructure while providing scalable transactional data access. This paper aims at providing the design of a system in progress, highlighting the major design choices, analyzing the different guarantees provided by the system, and identifying several important challenges for the research community striving for computing in the cloud.
    Comment: 5 pages. In Proc. of USENIX HotCloud 2009.
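    The elasticity argument can be illustrated with a toy partition-ownership scheme: each transaction routes to the node that owns its partition, and partitions migrate when nodes are added. This is a hedged sketch of the general idea with invented names, not ElasTraS's actual architecture, which the paper describes.

```python
# Toy elastic data store: data is split into partitions, each owned by
# one node, and ownership is rebalanced when the cluster grows.

class ElasticStore:
    def __init__(self, nodes, num_partitions):
        self.nodes = list(nodes)
        # Assign partitions round-robin to the initial nodes.
        self.owner = {p: self.nodes[p % len(self.nodes)]
                      for p in range(num_partitions)}

    def route(self, key):
        # A transaction executes on the node owning its partition, so a
        # single-partition transaction needs no distributed commit.
        partition = hash(key) % len(self.owner)
        return self.owner[partition]

    def scale_out(self, new_node):
        # Elasticity: hand some partitions to the newly added node.
        self.nodes.append(new_node)
        for p in list(self.owner):
            if p % len(self.nodes) == len(self.nodes) - 1:
                self.owner[p] = new_node


store = ElasticStore(["n1", "n2"], num_partitions=8)
store.scale_out("n3")        # some partitions now route to the new node
print(store.route("user:42"))
```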

    Middleware-based Database Replication: The Gaps between Theory and Practice

    The need for high availability and performance in data management systems has been fueling a long-running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. Over time, this has created a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. We sift through representative examples from the last decade of open-source, academic, and commercial database replication systems and combine this material with case studies from real systems deployed at Fortune 500 customers. We propose two agendas, one for academic research and one for industrial R&D, which we believe can bridge the gap within 5-10 years. This way, we hope to both motivate and help researchers in making the theory and practice of middleware-based database replication more relevant to each other.
    Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 2008.

    Persistent Database Buffer Caching and Logging with Slotted Page Structure

    Emerging byte-addressable persistent memory (PM) can improve the performance of computer systems by reducing redundant write operations. Traditional database management systems use recovery techniques to prevent data loss. These techniques copy the entire page into block-device storage several times for a single insertion, so the amount of I/O is not negligible. In this work, we consider PM as main memory, so the durability of data in the buffer cache is ensured. To guarantee consistency, we exploit the slotted page structure commonly used in database systems. We observe that the slot header, which stores the metadata of the page in the slotted page structure, can act like a commit mark in the persistent database buffer cache. We then present two novel database management schemes using a persistent buffer cache and slotted pages. The in-place commit scheme updates a page atomically using hardware transactional memory; it makes no additional copies and has optimal performance. The slot header logging scheme is needed when more than one page is updated. Unlike existing logging techniques, slot header logging reduces write operations by logging only the commit mark. We implemented these schemes in SQLite and evaluated their performance against NVWAL, the state-of-the-art scheme. Our experiments show that the in-place commit scheme needs only 3 cache-line flush instructions per insertion, and the slot header logging scheme reduces logging overhead by at least 1/4.
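    The commit-mark idea can be sketched as an ordering rule on a slotted page: persist the record bytes first, then persist the slot header, so the header update is the atomic commit point. The flush calls below are placeholders for cache-line flush instructions on real PM, and the whole class is an illustrative sketch of the ordering idea, not the paper's SQLite implementation.

```python
# Slotted page where the slot header acts as the commit mark: a crash
# before the header flush leaves the old, consistent page; a crash after
# it leaves the new record durably committed.

class SlottedPage:
    def __init__(self, size=4096):
        self.data = bytearray(size)
        self.slots = []          # slot header: (offset, length) per record
        self.free_off = size     # records grow downward from the page end

    def flush(self, what):
        pass  # placeholder for clflush/clwb plus a fence on real PM

    def insert(self, record: bytes):
        # Step 1: copy the record into free space and persist it.
        self.free_off -= len(record)
        self.data[self.free_off:self.free_off + len(record)] = record
        self.flush("record bytes")
        # Step 2: persist the new slot entry last; only after this flush
        # is the insertion visible, so the header is the commit mark.
        self.slots.append((self.free_off, len(record)))
        self.flush("slot header")


page = SlottedPage()
page.insert(b"hello")   # a crash between the two flushes loses only the
                        # uncommitted record, never corrupts the page
```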