
    SAP HANA Platform

    Get PDF
    This thesis discusses the in-memory database SAP HANA. It describes in detail the architecture and the new technologies this database uses. It then compares the speed of inserting and selecting records against MaxDB, the relational database currently in use. For the purposes of this testing I created a simple application in the ABAP language that runs the tests and displays their results. The results are summarized in the last chapter and show SAP HANA to be clearly faster at selecting data, but comparable or slower when inserting data into the database. I see the contribution of my work in summarizing the substantial changes that in-memory data storage brings and in a clear comparison of the execution speed of basic query types.
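    The thesis reports only its measurements, but the shape of such a benchmark is easy to sketch. Below is a hedged illustration in Python of the insert/select timing comparison it describes; the original tool was written in ABAP against SAP HANA and MaxDB, and sqlite3 stands in here purely to show the measurement structure.

    ```python
    # Illustrative stand-in for the thesis's ABAP benchmark: time bulk
    # inserts and point selects separately. sqlite3 is an assumption here,
    # not the databases the thesis actually measured.
    import sqlite3
    import time

    def time_inserts(conn, n):
        """Insert n rows and return elapsed seconds."""
        cur = conn.cursor()
        start = time.perf_counter()
        cur.executemany(
            "INSERT INTO t (k, v) VALUES (?, ?)",
            ((i, f"row-{i}") for i in range(n)),
        )
        conn.commit()
        return time.perf_counter() - start

    def time_selects(conn, n):
        """Run n point lookups and return elapsed seconds."""
        cur = conn.cursor()
        start = time.perf_counter()
        for i in range(n):
            cur.execute("SELECT v FROM t WHERE k = ?", (i,))
            cur.fetchone()
        return time.perf_counter() - start

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")
    print("insert:", time_inserts(conn, 10_000))
    print("select:", time_selects(conn, 10_000))
    ```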

    Tracking a Web Site's Historical Links with Temporal URLs

    Get PDF
    The historical links of a web site include URLs invalidated by web site reorganization, document removal, renaming, or relocation, and links to document snapshots, which are defined as a document’s contents as of a specific point in time. Tracking historical links allows users to use out-of-date URLs and to retrieve removed documents and document snapshots. This paper presents a logging and archiving scheme to track a document’s history of changes, and a Temporal URL scheme for users to submit a URL with temporal requirements. With the proposed schemes, a web site is able to track its historical links and provide better searching and information for its users.
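    As a rough illustration of how such a Temporal URL might be resolved, the sketch below parses a point-in-time requirement from a URL and looks up the document version valid at that time in a historical-link log. The query-parameter name (asof) and the log structure are assumptions for illustration, not the paper's actual syntax.

    ```python
    # Resolve a URL carrying a temporal requirement against a per-path
    # history of archived versions. HISTORY's layout is hypothetical.
    from datetime import datetime
    from urllib.parse import urlparse, parse_qs
    import bisect

    # path -> list of (snapshot time, archived location), time-ascending
    HISTORY = {
        "/reports/q1.html": [
            (datetime(2001, 3, 1), "/archive/q1-v1.html"),
            (datetime(2001, 6, 1), "/archive/q1-v2.html"),
        ],
    }

    def resolve_temporal_url(url: str) -> str:
        """Return the archived location valid at the requested time."""
        parts = urlparse(url)
        asof = datetime.fromisoformat(parse_qs(parts.query)["asof"][0])
        versions = HISTORY[parts.path]
        times = [t for t, _ in versions]
        idx = bisect.bisect_right(times, asof) - 1  # latest snapshot <= asof
        if idx < 0:
            raise KeyError("no snapshot at or before the requested time")
        return versions[idx][1]

    print(resolve_temporal_url("http://example.org/reports/q1.html?asof=2001-04-15"))
    ```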

    An Initial Design of a Website Snapshot Management System

    Get PDF
    A website snapshot is the state of a website at a specific point in time. It supports applications that require historical data. Most website snapshots are created by making a copy of the website. These date-time stamped physical snapshots cannot satisfy users’ need for snapshots at other points in time. This research proposes a scheme that can create website snapshots meeting any snapshot-time requirement by recording changes to the website in a log. For web pages producing dynamic content from a database, this scheme allows the pages to access a database snapshot as of the website snapshot time.
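    A minimal sketch of the log-replay idea, under assumed log-record fields: each change is appended to a log, and a snapshot at any requested time is reconstructed by replaying entries up to that time.

    ```python
    # Reconstruct the site state at an arbitrary time from a change log.
    # The (time, page, action, content) record shape is an assumption.
    from datetime import datetime

    # change log in time order
    LOG = [
        (datetime(2002, 1, 1), "index.html", "create", "v1"),
        (datetime(2002, 2, 1), "news.html", "create", "n1"),
        (datetime(2002, 3, 1), "index.html", "update", "v2"),
        (datetime(2002, 4, 1), "news.html", "delete", None),
    ]

    def snapshot(at: datetime) -> dict:
        """Replay the change log up to `at` to get the site state then."""
        state = {}
        for t, page, action, content in LOG:
            if t > at:
                break
            if action == "delete":
                state.pop(page, None)
            else:  # create or update
                state[page] = content
        return state

    print(snapshot(datetime(2002, 3, 15)))  # {'index.html': 'v2', 'news.html': 'n1'}
    ```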

    The Design of a Web Snapshot Management System for Decision Support Applications

    Get PDF
    Database snapshots that are defined and/or delivered via the Web are called web snapshots. This paper addresses the requirements for web snapshot management. A web snapshot management system is proposed; its architecture and the functions of its major components are described, and new commands are defined to perform web snapshot management activities.

    The Design of a Web Document Snapshots Delivery System

    Get PDF
    A web document snapshot is a point-in-time capture of a web document’s code and the resulting presentation of executing that code. It is used as a way of electronically preserving historical information published in web documents, enabling an organization to audit a web document’s contents at a point in the past and perform business analyses with the historical information recorded in it. It is also an archived copy of a web document when it is changed. This research develops a system to deliver snapshots of a web document’s static and dynamic contents when it is requested. The system consists of a Database Snapshot Manager for providing database snapshots and a Web Document Snapshot Manager for providing web document snapshots. Algorithms supporting the two managers are presented.
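    The abstract's two-manager split can be illustrated roughly as follows: a Database Snapshot Manager answers queries as of the snapshot time, and a Web Document Snapshot Manager picks the page code as of that time and executes it against the database snapshot. The class names and template convention below are illustrative assumptions, not the paper's interfaces.

    ```python
    # Two cooperating managers: one versions query results, the other
    # versions page code, and rendering combines the two at one time point.
    from datetime import datetime

    class DatabaseSnapshotManager:
        def __init__(self, versions):
            self.versions = versions  # list of (time, rows), time-ascending

        def query(self, at: datetime):
            """Return the rows valid at the snapshot time."""
            rows = []
            for t, snapshot_rows in self.versions:
                if t <= at:
                    rows = snapshot_rows
            return rows

    class WebDocumentSnapshotManager:
        def __init__(self, code_versions, db: DatabaseSnapshotManager):
            self.code_versions = code_versions  # list of (time, template)
            self.db = db

        def render(self, at: datetime) -> str:
            """Pick the page code as of `at`, fill it from the DB snapshot."""
            template = ""
            for t, code in self.code_versions:
                if t <= at:
                    template = code
            return template.format(items=", ".join(self.db.query(at)))

    db = DatabaseSnapshotManager([(datetime(2003, 1, 1), ["a", "b"]),
                                  (datetime(2003, 6, 1), ["a", "b", "c"])])
    doc = WebDocumentSnapshotManager([(datetime(2003, 1, 1), "Catalog: {items}")], db)
    print(doc.render(datetime(2003, 3, 1)))  # Catalog: a, b
    ```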

    Instant restore after a media failure

    Full text link
    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
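    The on-demand mechanism can be sketched roughly as follows: a read of data on the failed device first restores just the containing segment from its backup image and log records, so applications proceed while the background restore continues. The data structures below are assumptions; the actual design builds on ARIES-style logging and single-pass restore.

    ```python
    # On-demand segment restore: reads drive which segments get restored
    # first. Strings stand in for backup images and redo records.
    BACKUP = {0: "seg0@backup", 1: "seg1@backup"}        # per-segment backup image
    LOG    = {0: ["redo-a"], 1: ["redo-b", "redo-c"]}    # per-segment redo records

    restored = {}  # segment id -> restored contents

    def restore_segment(seg: int) -> str:
        """Replay the segment's log records against its backup image."""
        contents = BACKUP[seg]
        for record in LOG.get(seg, []):
            contents += f"+{record}"  # stand-in for applying a redo record
        return contents

    def read(seg: int) -> str:
        """Serve a read, restoring the segment on demand if necessary."""
        if seg not in restored:
            restored[seg] = restore_segment(seg)  # on demand, guided by the read
        return restored[seg]

    print(read(1))  # seg1@backup+redo-b+redo-c
    ```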

    Speedy Transactions in Multicore In-Memory Databases

    Get PDF
    Silo is a new in-memory database that achieves excellent performance and scalability on modern multicore machines. Silo was designed from the ground up to use system memory and caches efficiently. For instance, it avoids all centralized contention points, including that of centralized transaction ID assignment. Silo's key contribution is a commit protocol based on optimistic concurrency control that provides serializability while avoiding all shared-memory writes for records that were only read. Though this might seem to complicate the enforcement of a serial order, correct logging and recovery is provided by linking periodically-updated epochs with the commit protocol. Silo provides the same guarantees as any serializable database without unnecessary scalability bottlenecks or much additional latency. Silo achieves almost 700,000 transactions per second on a standard TPC-C workload mix on a 32-core machine, as well as near-linear scalability. Considered per core, this is several times higher than previously reported results.
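    A rough sketch of this style of commit protocol, not Silo's actual code: reads remember each record's transaction ID (TID), and commit locks only the write set before re-validating the read set, so records that were only read are never written to. Epoch handling, phantom protection, and real multi-threaded latching are simplified away here.

    ```python
    # Simplified OCC commit in the spirit Silo's abstract describes:
    # lock writes, re-validate read TIDs, then install new values.
    class Record:
        def __init__(self, value):
            self.value = value
            self.tid = 0          # TID of the last committing transaction
            self.locked = False

    class Transaction:
        def __init__(self):
            self.read_set = {}    # record -> TID observed at read time
            self.write_set = {}   # record -> new value

        def read(self, rec):
            self.read_set[rec] = rec.tid
            return rec.value

        def write(self, rec, value):
            self.write_set[rec] = value

        def commit(self, new_tid):
            for rec in self.write_set:                   # phase 1: lock writes
                rec.locked = True
            try:
                for rec, seen in self.read_set.items():  # phase 2: validate reads
                    # abort if the record changed, or is locked by another writer
                    if rec.tid != seen or (rec.locked and rec not in self.write_set):
                        return False
                for rec, value in self.write_set.items():  # phase 3: install
                    rec.value, rec.tid = value, new_tid
                return True
            finally:
                for rec in self.write_set:
                    rec.locked = False
    ```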

    Fast transactions for multicore in-memory databases

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 55-57). By Stephen Lyle Tu.
    Though modern multicore machines have sufficient RAM and processors to manage very large in-memory databases, it is not clear what the best strategy for dividing work among cores is. Should each core handle a data partition, avoiding the overhead of concurrency control for most transactions (at the cost of increasing it for cross-partition transactions)? Or should cores access a shared data structure instead? We investigate this question in the context of a fast in-memory database. We describe a new transactionally consistent database storage engine called MAFLINGO. Its cache-centered data structure design provides excellent base key-value store performance, to which we add a new, cache-friendly serializable protocol and support for running large, read-only transactions on a recent snapshot. On a key-value workload, the resulting system introduces negligible performance overhead as compared to a version of our system with transactional support stripped out, while achieving linear scalability versus the number of cores. It also exhibits linear scalability on TPC-C, a popular transactional benchmark. In addition, we show that a partitioning-based approach ceases to be beneficial if the database cannot be partitioned such that only a small fraction of transactions access multiple partitions, making our shared-everything approach more relevant. Finally, based on a survey of results from the literature, we argue that our implementation substantially outperforms previous main-memory databases on TPC-C benchmarks.
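    The partitioning question lends itself to a back-of-the-envelope model: partitioning removes concurrency-control overhead for single-partition transactions but pays a coordination penalty on cross-partition ones. The cost constants below are made-up illustrations, not measurements from the thesis.

    ```python
    # Toy cost model: partitioning wins only while the cross-partition
    # fraction of transactions stays small. All constants are hypothetical.
    def partitioned_cost(cross_fraction: float,
                         base: float = 1.0,
                         cross_penalty: float = 8.0) -> float:
        """Average cost per transaction under a partitioned design."""
        return base + cross_fraction * cross_penalty

    def shared_cost(base: float = 1.0, cc_overhead: float = 0.5) -> float:
        """Average cost with a shared data structure plus concurrency control."""
        return base + cc_overhead

    for f in (0.0, 0.05, 0.10, 0.25):
        better = "partitioned" if partitioned_cost(f) < shared_cost() else "shared"
        print(f"cross-partition fraction {f:.2f}: {better} wins")
    ```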

    Supporting End-Users' Non-Consistent Views for Decision Support Applications

    Get PDF
    Database views typically are maintained to be consistent with the database at a specific point in time. There are applications, however, where users may prefer or require views which are not consistent with the database. Supporting non-consistent views provides better information for end-users in a decision support environment such as data warehousing. This paper examines the characteristics of non-consistent views and investigates their management.
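    One simple reading of a non-consistent view is a view materialized at a user-chosen refresh point that deliberately lags the live database; a minimal sketch under that assumption, with illustrative names:

    ```python
    # A view refreshed only on the user's schedule, so it may intentionally
    # disagree with the current database state.
    class DeferredView:
        def __init__(self, query):
            self.query = query    # callable producing the view rows
            self.rows = None      # materialized rows from the last refresh

        def refresh(self):
            """Re-materialize; only now does the view see the current database."""
            self.rows = self.query()

        def read(self):
            return self.rows      # may differ from running query() right now

    table = [("Q1", 100)]
    view = DeferredView(lambda: list(table))
    view.refresh()
    table.append(("Q2", 120))     # the database moves on...
    print(view.read())            # ...but the view stays at its refresh point
    ```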

    The H.E.S.S. central data acquisition system

    Full text link
    The High Energy Stereoscopic System (H.E.S.S.) is a system of Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in Namibia. It measures cosmic gamma rays of very high energies (VHE; >100 GeV) using the Earth's atmosphere as a calorimeter. The H.E.S.S. Array entered Phase II in September 2012 with the inauguration of a fifth telescope that is larger and more complex than the other four. This paper gives an overview of the current H.E.S.S. central data acquisition (DAQ) system with particular emphasis on the upgrades made to integrate the fifth telescope into the array. First, the various requirements for the central DAQ are discussed; then the general design principles employed to fulfil these requirements are described. Finally, the performance, stability and reliability of the H.E.S.S. central DAQ are presented. One of the major accomplishments is that less than 0.8% of observation time has been lost due to central DAQ problems since 2009.
    Comment: 17 pages, 8 figures, published in Astroparticle Physics