164 research outputs found

    Bridging the Gap between Application and Solid-State-Drives

    Data storage is one of the most important and often critical parts of a computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase-change memory (PCM), magnetic RAM (STT-RAM), and memristors, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not exhibit simple locality. Providing optimal storage requires accurate prediction of I/O behavior; however, workloads are increasingly dynamic and diverse, making both long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added to NAND-flash-based Solid State Disks (NAND-SSDs), but it introduces overheads such as address translation delay and garbage collection costs. Many recent studies aim to address these overheads; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity of machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance with both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash-based Solid State Disks. Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance on various filesystems.
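    The FTL overheads the abstract mentions (address translation and garbage collection) can be illustrated with a toy page-level FTL. This is a minimal sketch under simplified assumptions, not the dissertation's correlation-aware design: the block size, the greedy victim-selection policy, and all class and method names here are illustrative inventions.

```python
PAGES_PER_BLOCK = 4  # illustrative; real flash blocks hold far more pages

class PageFTL:
    """Toy page-level FTL: out-of-place writes plus greedy garbage collection."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.mapping = {}                 # logical page -> physical page
        self.valid = {}                   # physical page -> logical page (live)
        blocks = list(range(num_blocks))
        self.spare = blocks.pop()         # reserved destination for GC copies
        self.active = blocks.pop()        # block currently receiving writes
        self.free_blocks = blocks
        self.next_off = 0

    def write(self, lpn):
        """Write a logical page out-of-place; the old copy becomes stale."""
        if self.next_off == PAGES_PER_BLOCK:       # active block is full
            if self.free_blocks:
                self.active = self.free_blocks.pop()
                self.next_off = 0
            else:
                self._garbage_collect()            # installs a new active block
        ppn = self.active * PAGES_PER_BLOCK + self.next_off
        self.next_off += 1
        old = self.mapping.get(lpn)
        if old is not None:
            del self.valid[old]                    # invalidate the stale copy
        self.mapping[lpn] = ppn
        self.valid[ppn] = lpn

    def read(self, lpn):
        """The address-translation overhead: one mapping lookup per read."""
        return self.mapping[lpn]

    def _garbage_collect(self):
        # Greedy policy: reclaim the used block with the fewest live pages,
        # since it costs the fewest copy-forward writes (write amplification).
        used = [b for b in range(self.num_blocks)
                if b not in (self.active, self.spare)]
        victim = min(used, key=lambda b: sum(1 for p in self.valid
                                             if p // PAGES_PER_BLOCK == b))
        live = sorted(p for p in self.valid if p // PAGES_PER_BLOCK == victim)
        assert len(live) < PAGES_PER_BLOCK, "device full: nothing to reclaim"
        for off, old in enumerate(live):           # relocate live pages to spare
            lpn = self.valid.pop(old)
            new = self.spare * PAGES_PER_BLOCK + off
            self.mapping[lpn] = new
            self.valid[new] = lpn
        self.active, self.next_off = self.spare, len(live)
        self.spare = victim                        # erased victim is the new spare
```

    The copy-forward writes in `_garbage_collect` are exactly the write-amplification cost the abstract refers to; a correlation-aware FTL would try to place pages so that blocks become fully stale together and little copying is needed.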

    Effective Use of SSDs in Database Systems

    With the advent of solid state drives (SSDs), the storage industry has experienced a revolutionary improvement in I/O performance. Compared to traditional hard disk drives (HDDs), SSDs benefit from shorter I/O latency, better power efficiency, and cheaper random I/Os. Because of these superior properties, SSDs are gradually replacing HDDs. For decades, database management systems have been designed, architected, and optimized based on the performance characteristics of HDDs. In order to utilize the superior performance of SSDs, new methods should be developed, some database components should be redesigned, and architectural decisions should be revisited. In this thesis, novel methods are proposed to exploit the capabilities of modern SSDs to improve the performance of database systems. The first is a new method for using SSDs as a fully persistent second-level memory buffer pool. This method uses SSDs as a supplementary storage device to improve transactional throughput and to reduce checkpoint and recovery times. A prototype of the proposed method is compared with its closest existing competitor. The second method considers the impact of the parallel I/O capability of modern SSDs on the database query optimizer. It is shown that a query optimizer that is unaware of the parallel I/O capability of SSDs can make significantly sub-optimal decisions. In addition, a practical method for making the query optimizer parallel-I/O-aware is introduced and evaluated empirically. The third technique is an SSD-friendly external merge sort. This sorting technique has better performance than other common external sorting techniques, and it also improves the SSD's lifespan by reducing the number of write operations required during sorting.
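    For context on the third technique, a conventional external merge sort, the baseline an SSD-friendly variant would improve on, can be sketched as follows. This is a generic textbook algorithm, not the thesis's method; the function name, run-file layout, and parameters are illustrative assumptions. Note that run generation writes every element to storage once, which is the write traffic an SSD-friendly sort would try to reduce.

```python
import heapq
import os
import tempfile

def external_sort(values, run_size):
    """Sort `values` with bounded memory: sort run_size-element runs in memory,
    spill each sorted run to a file, then stream a k-way merge over the runs."""
    with tempfile.TemporaryDirectory() as tmp:
        paths = []
        for i in range(0, len(values), run_size):      # run-generation pass
            run = sorted(values[i:i + run_size])        # in-memory sort
            path = os.path.join(tmp, f"run{len(paths)}.txt")
            with open(path, "w") as f:
                f.writelines(f"{v}\n" for v in run)     # one write per element
            paths.append(path)

        def stream(path):
            with open(path) as f:
                for line in f:
                    yield int(line)

        # heapq.merge performs a streaming k-way merge of the sorted runs,
        # so only one element per run is held in memory during the merge.
        return list(heapq.merge(*(stream(p) for p in paths)))
```

    Each additional merge pass in a multi-pass sort rewrites the whole dataset, so reducing the number of passes (or the data written per pass) directly extends SSD lifespan, which is the motivation the abstract gives.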

    Feasibility study on the microeconomic impact of enforcement of competition policies on innovation: final report

    Following seminal contributions from two of the giants of 20th century economics, Schumpeter and Arrow, the relationship between competition and innovation has long been hotly debated, but the general consensus is that competition, whether for the market or in the market, is an important stimulus to innovation. This provides an important additional justification for competition policy, beyond the static, purely price-based perspective. Remarkably, however, we know relatively little about how specific competition policy interventions have impacted firms' innovation activities. So whilst the impact evaluation literature has made important strides in recent decades in assessing the static gains driven by anti-trust and merger control, there have been only a few studies evaluating the impacts of individual policy decisions in this area. The main objective of this study is to explore whether, and how far, such impact evaluation exercises are feasible for competition and innovation. For this reason DG COMP commissioned a team of academics led by Peter Ormosi at the Centre for Competition Policy, University of East Anglia, to review the existing literature and to propose a rigorous analytical and methodological framework which can be used to evaluate cases. As an illustration of this framework in action, the study provides a pilot evaluation of the Seagate/Samsung and Western Digital/Hitachi mergers. The findings of this case study prove to be interesting in their own right, shedding some new light on these important mergers. But far more important for present purposes, it establishes that the methodology is viable, albeit with important lessons to be learnt. The objective of this study was to offer a detailed literature review, develop a methodological framework, collect data on three different areas (R&D spending, patents, and product characteristics), and analyse it. Our task was to identify what is feasible, what we can learn in terms of the applied methodology, and also to provide preliminary results on how innovation was affected by the 2012 consolidation of the HDD market.