236 research outputs found

    SAP HANA Platform

    This thesis discusses the in-memory database SAP HANA. It describes in detail the architecture and the new technologies this database uses. The next section compares the speed of inserting records into and selecting records from the database against the existing relational database MaxDB. For the purposes of this testing, I created a simple application in the ABAP language that lets the user run the tests and displays their results. These results are summarized in the last chapter and show SAP HANA to be clearly faster when selecting data, but comparable or slower when inserting data into the database. I see the contribution of my work in summarizing the substantial changes that data stored in main memory brings and in a clear comparison of the execution speed of the basic query types.
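
    The kind of timing comparison the thesis describes can be sketched in a few lines. The following is a minimal, hypothetical stand-in for the ABAP test application, using Python's built-in sqlite3 instead of HANA/MaxDB; the table, row count, and queries are assumptions for illustration only.

    ```python
    import sqlite3
    import time

    # Hypothetical benchmark: time a bulk insert and a full-table
    # aggregation against one table, mirroring the thesis's two test types.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    rows = [(i, i * 0.5) for i in range(100_000)]

    start = time.perf_counter()
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()
    insert_time = time.perf_counter() - start

    start = time.perf_counter()
    total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    select_time = time.perf_counter() - start

    print(f"insert: {insert_time:.3f}s  select: {select_time:.3f}s  sum: {total:.1f}")
    ```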

    In-Memory and Column Storage Changes IS Curriculum

    Random Access Memory (RAM) prices have been dropping precipitously. This has given rise to the possibility of keeping all gathered data in RAM rather than utilizing disk storage. This technological capability, along with the benefits of a columnar storage database system, reduces the advantage of the traditional Relational Database Management System (RDBMS) and eliminates the need for Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP) activities to remain separate. The RDBMS was a required data structure due to the need to separate daily OLTP activities from the OLAP analysis of that data. In-memory processing allows both activities simultaneously, so data analysis can be done at the speed of data capture. Relational databases are no longer the only option for organizations. In-memory is emerging, and university curricula need to innovate and build the skills associated with denormalizing existing (legacy) database systems to prepare the next generation of data managers.
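
    The row-versus-column distinction at the heart of this argument is easy to see in a toy example. This sketch (data and field names are hypothetical) lays the same records out both ways: the row layout serves OLTP-style point lookups, while the column layout lets OLAP-style aggregates scan a single contiguous attribute.

    ```python
    # The same three records in a row-oriented and a column-oriented layout.
    records = [
        {"id": 1, "region": "EU", "revenue": 120.0},
        {"id": 2, "region": "US", "revenue": 340.0},
        {"id": 3, "region": "EU", "revenue": 95.0},
    ]

    # Row store: one tuple per record, all attributes of a record contiguous.
    row_store = [(r["id"], r["region"], r["revenue"]) for r in records]

    # Column store: one contiguous array per attribute.
    column_store = {
        "id":      [r["id"] for r in records],
        "region":  [r["region"] for r in records],
        "revenue": [r["revenue"] for r in records],
    }

    print(row_store[1])                  # OLTP: fetch the whole record with id 2
    print(sum(column_store["revenue"]))  # OLAP: scan only the revenue column
    ```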

    V+H: Architecture to Manage DSS and OLTP Workloads

    In the last few years, research has been done to define the best approach a DBMS should follow to manage different workloads. Some approaches have followed “one size fits all,” trying to incorporate all features into a row-oriented (also called horizontal) DBMS to manage both OLTP and DSS workloads. Additionally, there have been specialized DBMSs following a columnar (also called vertical) approach that focus on the growing demand to manage DSS workloads efficiently. This paper proposes a combination of vertical and horizontal DBMSs to best manage OLTP and DSS workloads. We used mature, commercially available products from a single vendor and developed a custom middleware, the Decision Query Module, that identifies the best option for the most efficient execution of a query. This V+H architecture also offers the functionality of a mirrored DB without paying twice for the storage.
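
    The routing decision such a Decision Query Module makes can be approximated with a simple heuristic. The sketch below is an assumption for illustration, not the paper's actual decision logic: writes and selective lookups go to the horizontal (row) engine, aggregate-heavy queries to the vertical (column) engine.

    ```python
    # Hypothetical Decision Query Module: pick the engine for a SQL string.
    AGGREGATE_TOKENS = ("SUM(", "AVG(", "COUNT(", "MIN(", "MAX(", "GROUP BY")

    def route(query: str) -> str:
        q = query.upper()
        if q.startswith(("INSERT", "UPDATE", "DELETE")):
            return "H"  # writes go to the row store, which mirrors the data
        if any(token in q for token in AGGREGATE_TOKENS):
            return "V"  # full-column scans favor the vertical engine
        return "H"      # default: selective point and range lookups

    print(route("SELECT SUM(amount) FROM sales GROUP BY region"))  # -> V
    print(route("UPDATE accounts SET balance = 0 WHERE id = 7"))   # -> H
    ```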

    Growth of relational model: Interdependence and complementary to big data

    A database management system is a foundational application of computer science that provides a platform for the creation, movement, and use of voluminous data. The area has witnessed a series of developments and technological advancements, from the conventional structured database to the recent buzzword, big data. This paper aims to provide a complete picture of the relational database model, which is still widely used because of its well-known ACID properties, namely atomicity, consistency, isolation, and durability. Specifically, the objective of this paper is to highlight the adoption of relational-model approaches by big data techniques. To address the reasons for this incorporation, this paper qualitatively studies the advancements made to the relational data model over time. First, the variations in data storage layout are illustrated based on the needs of the application. Second, quick data retrieval techniques such as indexing, query processing, and concurrency control methods are reviewed. The paper provides vital insights for appraising the efficiency of the structured database in an unstructured environment, particularly when both consistency and scalability become an issue in the working of a hybrid transactional and analytical database management system.
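
    Of the ACID properties the paper leans on, atomicity is the easiest to demonstrate concretely. A minimal sketch using Python's built-in sqlite3 with a hypothetical accounts table: either both legs of a transfer commit, or neither does.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 0.0)])
    conn.commit()

    try:
        with conn:  # the connection context manager wraps one transaction
            conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
            raise RuntimeError("simulated crash between the two legs")
            # The credit below never runs, so the debit must not survive.
            conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
    except RuntimeError:
        pass

    # Atomicity: the partial debit was rolled back with the failed transaction.
    print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
    # -> [(100.0,), (0.0,)]
    ```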

    Cache Conscious Data Layouting for In-Memory Databases

    Many applications with manually implemented data management exhibit a data storage pattern in which semantically related data items are stored closer together in memory than unrelated data items. The strong semantic relationship between these data items commonly induces temporally close accesses to them. This is called the principle of data locality and has been recognized by hardware vendors; it is commonly exploited to improve hardware performance. General-purpose Database Management Systems (DBMSs), whose main goal is to simplify optimal data storage and processing, generally fall short of this claim because the usage pattern of the stored data cannot be anticipated when the system is designed. The current interest in column-oriented databases indicates that one strategy does not fit all applications. A DBMS that automatically adapts its storage strategy to the workload of the database promises a significant performance increase by maximizing the benefit of hardware optimizations that are based on the principle of data locality. This thesis gives an overview of optimizations that are based on the principle of data locality and the effect they have on the data access performance of applications. Based on these findings, a model is introduced that allows estimating the cost of data accesses from the arrangement of the data in main memory. This model is evaluated through a series of experiments and incorporated into an automatic layouting component for a DBMS. This layouting component allows the calculation of an analytically optimal storage layout. The performance benefits of this component are evaluated in an application benchmark.
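
    The cost model's core intuition can be shown with back-of-the-envelope arithmetic: scanning one attribute touches far fewer cache lines when its values are stored contiguously than when they are interleaved with the rest of the record. The sketch below is a toy estimate under assumed sizes, not the thesis's actual model.

    ```python
    import math

    CACHE_LINE = 64          # bytes per cache line
    NUM_RECORDS = 1_000_000  # hypothetical table size
    FIELD_BYTES = 8          # width of the scanned attribute
    RECORD_BYTES = 80        # ten 8-byte attributes per record (hypothetical)

    def lines_touched(stride: int, width: int, n: int) -> int:
        """Roughly how many distinct cache lines a scan of one attribute
        touches when consecutive values sit `stride` bytes apart."""
        if stride >= CACHE_LINE:
            return n  # each value lands on its own cache line
        return math.ceil(n * width / CACHE_LINE)  # values packed contiguously

    row_cost = lines_touched(RECORD_BYTES, FIELD_BYTES, NUM_RECORDS)  # row layout
    col_cost = lines_touched(FIELD_BYTES, FIELD_BYTES, NUM_RECORDS)   # column layout
    print(f"row layout: ~{row_cost:,} lines, column layout: ~{col_cost:,} lines")
    ```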

    How the High Performance Analytics Work with SAP HANA

    Informed decision-making, better communication, and faster response to business situations are the key differences between leaders and followers in this competitive global marketplace. A data-driven organization can analyze patterns and anomalies to make sense of the current situation and be ready for future opportunities. Organizations no longer have the problem of a “lack of data”, but of getting “actionable data” at the right time to act on, direct, and influence their business decisions. The data exists in different transactional systems and/or data warehouse systems, which take significant time to retrieve and process relevant information and negatively impact the time window to out-maneuver the competition. To solve the problem of “actionable data”, enterprises can take advantage of the SAP HANA [1] in-memory platform, which enables rapid processing and analysis of huge volumes of data in real time. This paper discusses how SAP HANA virtual data models can be used for on-the-fly analysis of live transactional data to derive insight, perform what-if analysis, and execute business transactions in real time without using persisted aggregates.
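
    The “no persisted aggregates” idea reduces to computing totals at query time from live line items instead of maintaining materialized summary tables. A minimal sketch, with hypothetical data and dimension names:

    ```python
    from collections import defaultdict

    # Live transactional line items (hypothetical).
    line_items = [
        {"region": "EMEA", "product": "A", "amount": 120.0},
        {"region": "EMEA", "product": "B", "amount": 80.0},
        {"region": "APJ",  "product": "A", "amount": 200.0},
    ]

    def revenue_by(dimension: str) -> dict:
        """Aggregate on the fly over whatever grouping the analyst asks for;
        nothing is precomputed, so there is no summary table to keep in sync."""
        totals = defaultdict(float)
        for item in line_items:
            totals[item[dimension]] += item["amount"]
        return dict(totals)

    print(revenue_by("region"))   # {'EMEA': 200.0, 'APJ': 200.0}
    print(revenue_by("product"))  # {'A': 320.0, 'B': 80.0}
    ```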

    Differential buffer for a relational column store in-memory database

    At present, financial and analytical reporting have taken precedence over operational reporting. The main difference is that operational reporting focuses on day-to-day operations and requires data at the level of individual transactions, while financial and analytical reporting focus on long-term operations and use many transactions at once. That situation, together with the evolution of hardware, has shaped relational databases over time. One of the resulting approaches is the SAP HANA database, which targets financial and analytical reporting without the use of data warehouses, while also providing good capabilities for operational reporting. This is achieved through the use of a column-store structure in main memory. However, preparing the data in that form holds back insertion performance. This document studies the possibility of using a buffer in a prototype based on the SAP HANA database architecture, with the goal of improving that performance. To compare the impact of adding a buffer on the system, multiple approaches have been implemented, tested, and carefully compared with each other and with the original prototype.
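
    A minimal sketch of the differential-buffer idea the thesis studies, with the structure assumed for illustration: inserts append cheaply to an uncompressed delta, a periodic merge rebuilds the dictionary-compressed main store, and reads combine both.

    ```python
    class DiffBufferColumn:
        """One column with a dictionary-compressed main store and a
        write-optimized differential buffer (details are assumptions)."""

        def __init__(self):
            self.dictionary = []  # sorted distinct values of the main store
            self.main = []        # value-ids pointing into the dictionary
            self.delta = []       # uncompressed recent inserts

        def insert(self, value):
            self.delta.append(value)  # no re-encoding on the write path

        def merge(self):
            # Fold the delta into the main store and rebuild the dictionary.
            values = [self.dictionary[vid] for vid in self.main] + self.delta
            self.dictionary = sorted(set(values))
            index = {v: i for i, v in enumerate(self.dictionary)}
            self.main = [index[v] for v in values]
            self.delta = []

        def scan(self):
            # Reads must see the main store and the delta together.
            return [self.dictionary[vid] for vid in self.main] + self.delta

    col = DiffBufferColumn()
    for v in ["EU", "US", "EU"]:
        col.insert(v)
    col.merge()
    col.insert("APJ")  # lands in the delta until the next merge
    print(col.scan())  # ['EU', 'US', 'EU', 'APJ']
    ```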