23 research outputs found

    A Novel Power Management For Cmp Systems In Data-Intensive Environment

    The emerging data-intensive applications of today comprise non-uniform CPU- and I/O-intensive workloads, imposing a requirement to consider both CPU and I/O effects in power management strategies. Scaling down the processor's frequency based only on its busy/idle ratio cannot fully exploit the opportunities for saving power. Our experiments show that besides the busy and idle states, each processor may also have I/O wait phases, during which it waits for I/O operations to complete. In these phases the completion time is decided by the I/O subsystem rather than the CPU, so scaling the processor to a lower frequency will not affect performance but will save more power. In addition, the CPU's reaction to I/O operations may be significantly affected by several factors, such as I/O type (synchronous or asynchronous) and instruction/job-level parallelism, so it cannot be accurately modeled via physical laws like mechanical or chemical systems. In this paper, we propose a novel power management scheme called MAR (modeless, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption under performance constraints. By using richer feedback factors, e.g., the I/O wait, MAR is able to accurately describe the relationships among core frequencies, performance, and power consumption. We adopt a modeless control model to reduce the complexity of system modeling. MAR is designed for CMP (Chip Multi-Processor) systems by employing multi-input/multi-output (MIMO) theory and per-core DVFS (Dynamic Voltage and Frequency Scaling). Our extensive experiments on a physical test bed demonstrate that, for the SPEC benchmark and a data-intensive (TPC-C) benchmark, MAR achieves 93.6-96.2% of the power savings of the ideal strategy calculated off-line. Compared with baseline solutions, MAR saves 22.5-32.5% more power while keeping a comparable performance loss of about 1.8-2.9%. In addition, simulation results show the efficiency of our design for various CMP configurations. © 2011 IEEE
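The rule-based idea in the abstract above can be sketched as follows. The thresholds, the frequency table, and the `choose_frequency` interface are illustrative assumptions for exposition, not values or APIs from the paper:

```python
# A minimal sketch of a rule-based frequency controller in the spirit of MAR.
# All thresholds below are illustrative assumptions, not the paper's rules.

def choose_frequency(busy, iowait, freqs):
    """Pick a core frequency from `freqs` (sorted ascending) based on the
    busy and I/O-wait ratios measured over the last control period."""
    if iowait > 0.5:
        # Completion time is bound by the I/O subsystem: run slow, save power.
        return freqs[0]
    if busy > 0.9:
        # CPU-bound phase: run at full speed to meet performance constraints.
        return freqs[-1]
    # In between, scale roughly with CPU demand, excluding I/O wait time.
    demand = busy / max(1.0 - iowait, 1e-9)
    idx = min(int(demand * len(freqs)), len(freqs) - 1)
    return freqs[idx]
```

A real controller would close the loop per core, re-measuring busy/iowait each period and applying the chosen frequency via DVFS.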

    A Novel Weighted-Graph-Based Grouping Algorithm for Metadata Prefetching

    Although data prefetching algorithms have been extensively studied for years, there is no counterpart research for metadata access performance. Existing data prefetching algorithms either lack emphasis on group prefetching or bear a high level of computational complexity, and so do not work well for metadata prefetching. Therefore, an efficient, accurate, and distributed metadata-oriented prefetching scheme is critical to improving the overall performance of large distributed storage systems. In this paper, we present a novel weighted-graph-based prefetching technique, built on both direct and indirect successor relationships, to reap performance benefits from prefetching specifically for clustered metadata servers, an arrangement envisioned necessary for petabyte-scale distributed storage systems. Extensive trace-driven simulations show that by adopting our new metadata prefetching algorithm, the miss rate for metadata accesses on the client site can be effectively reduced, while the average response time of metadata operations can be cut by up to 67 percent, compared with the legacy LRU caching algorithm and existing state-of-the-art prefetching algorithms.
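A minimal sketch of the direct/indirect successor-graph idea, assuming a simple trace-driven weighting; the lookahead window and the geometric weight decay are illustrative choices, not the paper's actual edge weights:

```python
from collections import defaultdict

def build_successor_graph(trace, lookahead=2):
    """Build a weighted successor graph from a metadata access trace.
    Direct successors (distance 1) get weight 1.0; indirect successors up
    to `lookahead` positions away get weight 1/distance (an illustrative
    decay, not the paper's weighting)."""
    graph = defaultdict(lambda: defaultdict(float))
    for i, item in enumerate(trace):
        for d in range(1, lookahead + 1):
            if i + d < len(trace):
                graph[item][trace[i + d]] += 1.0 / d
    return graph

def prefetch_candidates(graph, item, k=2):
    """Return the k highest-weight successors of `item` as a prefetch group."""
    succ = graph.get(item, {})
    return [n for n, _ in sorted(succ.items(), key=lambda kv: -kv[1])[:k]]
```

On each metadata access, the server would look up the accessed item and push its top-weighted successor group to the client cache.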

    A Scalable Reverse Lookup Scheme Using Group-Based Shifted Declustering Layout

    Recent years have witnessed an increasing demand for super data clusters, which have reached the petabyte scale and can consist of thousands or tens of thousands of storage nodes at a single site. For this architecture, reliability is becoming a great concern. In order to achieve high reliability, data recovery and node reconstruction are a must. Although extensive research has investigated how to sustain high performance and high reliability in case of node failures at large scale, the reverse lookup problem, namely finding the list of objects stored on a failed node, remains open. This is especially true for storage systems with high requirements for data integrity and availability, such as scientific research data clusters. Existing solutions are either time-consuming or expensive. Replication-based block placement can be used to realize fast reverse lookup, but such schemes are designed for centralized, small-scale storage architectures. In this paper, we propose a fast and efficient reverse lookup scheme named Group-based Shifted Declustering (G-SD) layout that is able to locate the whole content of the failed node. G-SD extends our previous shifted declustering layout and applies to large-scale file systems. Our mathematical proofs and real-life experiments show that G-SD is a scalable reverse lookup scheme that is up to one order of magnitude faster than existing schemes. © 2011 IEEE
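The property that makes fast reverse lookup possible is that placement is a computable function of the object ID, so it can be inverted algebraically instead of scanning per-object metadata. A toy sketch, where the modular placement rule is a simplified stand-in for the actual G-SD layout:

```python
def place(obj_id, replica, n_nodes, shift=1):
    """Illustrative shifted-declustering-style placement: replica r of
    object o lands on node (o + r * shift) mod n_nodes. The real G-SD
    layout differs; this only shows that placement is a closed formula."""
    return (obj_id + replica * shift) % n_nodes

def reverse_lookup(node, n_objects, n_replicas, n_nodes, shift=1):
    """Invert the placement function: list all (object, replica) pairs
    stored on `node` without scanning any per-object metadata."""
    hits = []
    for r in range(n_replicas):
        # Solve (o + r * shift) mod n_nodes == node for o.
        base = (node - r * shift) % n_nodes
        hits.extend((o, r) for o in range(base, n_objects, n_nodes))
    return hits
```

When a node fails, `reverse_lookup` enumerates exactly the objects needing re-replication in O(objects-per-node) time.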

    Co-Located Compute And Binary File Storage In Data-Intensive Computing

    With the rapid development of computation capability, the massive increase in data volume has outmoded compute-intensive clusters for HPC analysis of large-scale data sets, due to the huge amount of data transferred over the network. Co-located compute and storage has been introduced in data-intensive clusters to avoid the network bottleneck by launching computation on the nodes where most of the input data reside. Chunk-based storage systems are typical examples: they split data into blocks and randomly store them across nodes, and the records that serve as input to the analysis are read from these blocks. This method implicitly assumes that a single record resides on a single node, so that data transfer can be avoided. However, this assumption does not always hold, because there is a gap between records and blocks: the current solution overlooks the relationship between the computation unit (a record) and the storage unit (a block). When a record fits in one block, there is no data transfer; but in practice one record can span several blocks. This is especially true for binary files, which introduce extra data transfer for preparing the input data before conducting the analysis, since the blocks belonging to a single record are scattered randomly across the data nodes regardless of the semantics of the records. To address these problems, we develop two solutions in this paper: a Record-Based Block Distribution (RBBD) framework and a data-centric Weighted Set Cover Scheduling (WSCS) algorithm to schedule the tasks. The RBBD framework for data-intensive analytics aims to eliminate the gap between records and blocks and accomplishes zero data transfer among nodes. WSCS further improves performance by optimizing the combination of nodes.
Our experiments show that overlooking the record-block relationship can cause severe performance problems when a record comprises several blocks scattered across different nodes. Our proposed data storage strategy, RBBD, optimizes the block distribution according to the record-block relationship. Combined with our novel WSCS scheduler, it efficiently reduces extra data transfers and eventually improves the performance of the chunk-based storage system. Using our RBBD framework and WSCS in a chunk-based storage system, our extensive experiments show that data transfer decreases by 36.4% on average and the scheduling algorithm outperforms the random algorithm by 51%-62%, with a deviation from the ideal solutions of no more than 6.8%. © 2012 IEEE
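The scheduling step can be illustrated with the standard greedy heuristic for weighted set cover: choose nodes whose hosted blocks cover all blocks of a record at low total cost. The cost-per-newly-covered-block rule and the `node_cost` weights (e.g. current load) are illustrative assumptions, not necessarily the paper's exact objective:

```python
def greedy_weighted_set_cover(blocks, node_blocks, node_cost):
    """Greedy weighted set cover: pick nodes whose hosted block sets cover
    all `blocks` of a record, approximately minimizing total cost.
    `node_blocks` maps node -> set of hosted block IDs; `node_cost` maps
    node -> weight (an illustrative stand-in for load or distance)."""
    uncovered = set(blocks)
    chosen = []
    while uncovered:
        # Pick the node with the lowest cost per newly covered block.
        node = min(
            (n for n in node_blocks if node_blocks[n] & uncovered),
            key=lambda n: node_cost[n] / len(node_blocks[n] & uncovered),
        )
        chosen.append(node)
        uncovered -= node_blocks[node]
    return chosen
```

The greedy rule gives the classic ln(n)-approximation for set cover, which is why it is a reasonable sketch of scheduling a record's blocks onto few nodes.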

    Draw: A New Data-Grouping-Aware Data Placement Scheme For Data Intensive Applications With Interest Locality

    Recent years have seen an increasing number of scientists employ data-parallel computing frameworks such as MapReduce and Hadoop to run data-intensive applications and conduct analysis. In these co-located compute and storage frameworks, a wise data placement scheme can significantly improve performance. Existing data-parallel frameworks, e.g., Hadoop or Hadoop-based clouds, distribute the data using a random placement method for simplicity and load balance. However, we observe that many data-intensive applications exhibit interest locality: they sweep only part of a big data set, and the data often accessed together share grouping semantics. Without taking data grouping into consideration, random placement does not perform well and falls far below the efficiency of the optimal data distribution. In this paper, we develop a new Data-gRouping-AWare (DRAW) data placement scheme to address this problem. DRAW dynamically scrutinizes data accesses from system log files, extracts optimal data groupings, and re-organizes data layouts to achieve the maximum parallelism per group, subject to load balance. By running two real-world MapReduce applications with different data placement schemes on a 40-node test bed, we conclude that DRAW increases the total number of local map tasks executed by up to 59.8%, reduces the completion latency of the map phase by up to 41.7%, and improves the overall performance by 36.4%, in comparison with Hadoop's default random placement. © 1965-2012 IEEE
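The grouping step can be sketched as mining pairwise co-access counts from job logs and merging frequently co-accessed files with union-find. The co-access threshold rule is an illustrative assumption, not DRAW's actual clustering algorithm:

```python
from collections import defaultdict
from itertools import combinations

def coaccess_groups(accesses, threshold=2):
    """Illustrative grouping in the spirit of DRAW: `accesses` is a list of
    per-job file sets mined from logs. Files whose pairwise co-access count
    reaches `threshold` are merged into one group via union-find."""
    count = defaultdict(int)
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for job_files in accesses:
        for a, b in combinations(sorted(job_files), 2):
            count[(a, b)] += 1
            if count[(a, b)] == threshold:
                parent[find(a)] = find(b)   # merge the two groups

    groups = defaultdict(set)
    for job_files in accesses:
        for f in job_files:
            groups[find(f)].add(f)
    return [sorted(g) for g in groups.values()]
```

A placement layer would then spread each group's files across distinct nodes to maximize per-group map-task parallelism.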

    file1. Perpendicular to bedding

    The table contains a series of measurements for Megathrix longus in thin sections perpendicular to the bedding.

    Mar: A Novel Power Management For Cmp Systems In Data-Intensive Environment

    Emerging data-intensive applications create non-uniform CPU and I/O workloads, which impose the requirement to consider both CPU and I/O effects in power management strategies. Current approaches focus on scaling down the CPU frequency based on the CPU busy/idle ratio, without taking I/O into consideration; therefore, they do not fully exploit the opportunities for power conservation. In this paper, we propose a novel power management scheme called model-free, adaptive, rule-based (MAR) for multiprocessor systems to minimize CPU power consumption subject to performance constraints. By introducing a new I/O wait status, MAR is able to accurately describe the relationships among core frequencies, performance, and power consumption. Moreover, we adopt a model-free control method to filter the I/O wait status out of the traditional CPU busy/idle model, in order to achieve fast responsiveness to burst situations and take full advantage of power saving. Our extensive experiments on a physical testbed demonstrate that, for SPEC benchmarks and data-intensive (TPC-C) benchmarks, an MAR prototype system achieves 95.8-97.8 percent of the power savings of the ideal strategy calculated offline. Compared with baseline solutions, MAR saves 12.3-16.1 percent more power while maintaining a comparable performance loss of about 0.78-1.08 percent. In addition, further simulation results indicate that our design achieves 3.35-14.2 percent more power saving efficiency and 4.2-10.7 percent less performance loss under various CMP configurations, as compared with baseline approaches such as LAST, Relax, PID, and MPC.
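The role of the I/O wait status can be illustrated by splitting a core's time into busy, idle, and I/O-wait shares from cumulative tick counters, as exposed for example by Linux's /proc/stat; the `(busy, idle, iowait)` tuple layout here is an illustrative assumption:

```python
def demand_signals(prev, cur):
    """Compute a core's busy and I/O-wait shares from two snapshots of
    cumulative tick counters laid out as (busy, idle, iowait) tuples.
    The layout is an illustrative assumption for this sketch."""
    busy = cur[0] - prev[0]
    idle = cur[1] - prev[1]
    iowait = cur[2] - prev[2]
    total = busy + idle + iowait
    if total == 0:
        return 0.0, 0.0
    # I/O wait is treated as scalable time: the core can run slower during
    # it without delaying completion, so it is reported separately rather
    # than folded into the busy (demand) signal.
    return busy / total, iowait / total
```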

    Concentric Layout, a New Scientific Data Distribution Scheme in Hadoop File System

    The data generated by scientific simulations, sensors, monitors, and optical telescopes has grown at a dramatic speed. In order to analyze the raw data quickly and space-efficiently, a data pre-processing step is needed to achieve better performance in the data analysis phase. Current research shows an increasing trend of adopting the MapReduce framework for large-scale data processing. However, the data access patterns generally applied to scientific data sets are not directly supported by the current MapReduce framework. The gap between the requirements of analytics applications and the properties of the MapReduce framework motivates us to support these data access patterns within it. In our work, we studied the data access patterns in matrix files and propose a new concentric data layout to facilitate matrix data access and analysis in the MapReduce framework. The concentric data layout is a hierarchical layout that maintains the dimensional properties of large data sets. Contrary to the continuous data layout adopted in the current Hadoop framework, the concentric data layout stores the data from the same sub-matrix in one chunk and then stores chunks symmetrically at a higher level, which matches matrix-like computation well. The concentric data layout preprocesses the data beforehand and optimizes the subsequent run of the MapReduce application. Our experiments show that the concentric data layout improves overall performance, reducing execution time by about 38% when reading a 64 GB file. It also mitigates the unused-data read overhead and increases useful-data efficiency by 32% on average. © 2010 IEEE
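The sub-matrix chunking idea can be sketched as follows; the square chunk shape and the returned dictionary layout are illustrative assumptions for exposition, not the actual Hadoop integration:

```python
def concentric_chunks(matrix, chunk):
    """Illustrative sub-matrix chunking in the spirit of a concentric
    layout: instead of storing the matrix row-major, store each
    chunk x chunk sub-matrix contiguously, so a matrix-style reader
    touches one block per sub-matrix instead of many row stripes.
    Returns {(block_row, block_col): flat list of values}."""
    rows, cols = len(matrix), len(matrix[0])
    blocks = {}
    for br in range(0, rows, chunk):
        for bc in range(0, cols, chunk):
            blocks[(br // chunk, bc // chunk)] = [
                matrix[r][c]
                for r in range(br, min(br + chunk, rows))
                for c in range(bc, min(bc + chunk, cols))
            ]
    return blocks
```

Reading one sub-matrix then means reading one contiguous block, which is the access pattern the continuous row-major layout penalizes.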