
    Replica maintenance strategy for data grid

    Data Grid is an infrastructure that manages huge amounts of data files and provides intensive computational resources across geographically distributed collaborations. The performance of such a system can be increased by improving overall resource usage, which includes network and storage resources. Improving network resource usage means making good use of network bandwidth, an important factor affecting job execution time, while improving storage resource usage means making good use of storage space. Data replication is one of the methods used to improve the performance of data access in distributed systems by keeping multiple copies of data files at distributed sites. Once replicas have been distributed to various locations, they need to be monitored, and as a result of dynamic changes in the data grid environment some of the replicas need to be relocated. In this paper we propose a maintenance replica placement strategy termed the Unwanted Replica Deletion Strategy (URDS), as part of a replica maintenance service. The main purpose of the proposed strategy is to identify unwanted replicas and the locations from which they should be deleted. OptorSim is used to evaluate the performance of the proposed strategy. The simulation results show that URDS requires less execution time, incurs lower network usage, and makes better use of storage space than existing approaches.
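
    As a rough illustration of the kind of decision such a maintenance service has to make, the sketch below picks replicas to delete at an over-full site. The selection criteria (fewest recent accesses, never deleting the last remaining copy) and all names are illustrative assumptions, not the URDS rules described in the paper.

```python
# Hypothetical sketch: choose "unwanted" replicas to delete when a site's
# storage is under pressure. Criteria are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Replica:
    file_id: str
    site: str
    size_mb: float
    recent_accesses: int

def select_unwanted(replicas, site, capacity_mb, used_mb):
    """Return replicas at `site` to delete until usage fits within capacity."""
    overflow = used_mb - capacity_mb
    if overflow <= 0:
        return []
    # Never delete the last copy of a file anywhere in the grid.
    copies = {}
    for r in replicas:
        copies[r.file_id] = copies.get(r.file_id, 0) + 1
    candidates = [r for r in replicas if r.site == site and copies[r.file_id] > 1]
    # Least useful first: fewest recent accesses, largest size breaks ties.
    candidates.sort(key=lambda r: (r.recent_accesses, -r.size_mb))
    chosen, freed = [], 0.0
    for r in candidates:
        if freed >= overflow:
            break
        chosen.append(r)
        freed += r.size_mb
    return chosen
```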

    Replica Creation Algorithm for Data Grids

    Data grid system is a data management infrastructure that facilitates reliable access and sharing of large amounts of data, storage resources, and data transfer services that can be scaled across distributed locations. This thesis presents a new replication algorithm that improves data access performance in data grids by distributing relevant data copies around the grid. The new Data Replica Creation Algorithm (DRCM) improves the performance of data grid systems by reducing job execution time and making the best use of data grid resources (network bandwidth and storage space). Current algorithms focus on the number of accesses when deciding which file to replicate and where to place it, which ignores resources' capabilities. DRCM differs by considering both user and resource perspectives, strategically placing replicas at locations that provide the lowest transfer cost. The proposed algorithm uses three strategies: Replica Creation and Deletion Strategy (RCDS), Replica Placement Strategy (RPS), and Replica Replacement Strategy (RRS). DRCM was evaluated using network simulation (OptorSim) based on selected performance metrics (mean job execution time, efficient network usage, average storage usage, and computing element usage), scenarios, and topologies. Results revealed better job execution time with lower resource consumption than existing approaches. This research contributes replication strategies embodied in one algorithm that enhances data grid performance and is capable of deciding to create or delete more than one file in a single decision. Furthermore, a dependency-level-between-files criterion was utilized and integrated with the exponential growth/decay model to give an accurate file evaluation.
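
    To make the placement idea concrete, the sketch below chooses the candidate site with the lowest estimated transfer cost to the requesting sites and scores files with an exponentially decayed access count. The cost model, half-life, and function names are assumptions for illustration, not DRCM's actual formulas.

```python
# Hypothetical sketch of lowest-transfer-cost placement and decayed file value.
import math
import time

def transfer_cost(size_mb, bandwidth_mbps):
    """Approximate seconds to move a file of size_mb over a link of bandwidth_mbps."""
    return (size_mb * 8.0) / bandwidth_mbps

def best_placement(size_mb, candidate_sites, bandwidth_to_requesters):
    """bandwidth_to_requesters: {site: [Mbps to each requesting site]}."""
    def total_cost(site):
        return sum(transfer_cost(size_mb, bw) for bw in bandwidth_to_requesters[site])
    return min(candidate_sites, key=total_cost)

def file_value(access_times, now=None, half_life_s=3600.0):
    """Exponentially decayed access count: recent accesses weigh more."""
    now = now or time.time()
    decay = math.log(2) / half_life_s
    return sum(math.exp(-decay * (now - t)) for t in access_times)
```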

    Master/worker parallel discrete event simulation

    The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed, providing robust execution under a dynamic set of services with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant interaction by users. Research questions and challenges associated with the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations to increase the efficiency of large-scale simulation execution through distributed master service design and intrinsic overhead reduction. New techniques are proposed and examined for addressing challenges of optimistic parallel discrete event simulation across metacomputing infrastructures, such as rollbacks and message unsending, using an inherently different computation paradigm based on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means for high-throughput parallel discrete event simulation, enhancing existing computational capacity or providing alternate execution capability for less time-critical codes. Ph.D. Committee Chair: Fujimoto, Richard; Committee Members: Bader, David; Perumalla, Kalyan; Riley, George; Vuduc, Richard.
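
    A toy sketch of the master/worker split with time windows is given below: the master hands out bounded windows of pending events, and workers process events whose timestamps fall inside the window and hand back any events they generate. This is a minimal illustration of the paradigm under assumed data structures, not the thesis's web-services design or its rollback handling.

```python
# Hypothetical master/worker time-window sketch for discrete event simulation.
def worker(events, window_end, state):
    """Process events with timestamp <= window_end; return leftovers and new events."""
    pending = sorted(events)                      # (timestamp, name) tuples
    i, produced = 0, []
    while i < len(pending) and pending[i][0] <= window_end:
        ts, name = pending[i]
        state[name] = state.get(name, 0) + 1      # stand-in event handler
        if state[name] < 3:                       # handler may schedule follow-ups
            produced.append((ts + 7.5, name))
        i += 1
    return pending[i:], produced, state

def master(initial_events, end_time, window=10.0):
    """Advance global time in fixed windows, dispatching each window to a worker."""
    state, events, t = {}, list(initial_events), 0.0
    while t < end_time and events:
        events, produced, state = worker(events, t + window, state)
        events.extend(produced)
        t += window
    return state
```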

    Transaction-filtering data mining and a predictive model for intelligent data management

    This thesis first proposes a new data mining paradigm, transaction-filtering association rule mining, which addresses the time consumption caused by repeated scans of the original transaction database in conventional association rule mining algorithms. An in-memory transaction filter is designed to discard infrequent items during the pruning steps; this filter is a data structure updated at the end of each iteration. Results on an IBM benchmark show an execution time reduction of 10% - 19% compared with the base case. A data mining-based predictive model is then established, contributing to intelligent data management within the context of the Centre for Grid Computing. The capability of discovering unseen rules, patterns, and correlations makes data mining techniques favourable in areas where massive amounts of data are generated. The past behaviour of two typical scenarios (network file systems and Data Grids) has been analyzed to build the model. The future popularity of files can be forecast with an accuracy of 90% by deploying the above predictor on the given real system traces. A further step towards intelligent policy design is achieved by analyzing the prediction results for files' future popularity. Real system trace-based simulations have shown improvements of 2-4 times in data response time in the network file system scenario and a 24% reduction in mean job time in Data Grids compared with conventional cases.
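
    The sketch below illustrates the filtering idea in a frequent-itemset miner: after each pass, items that turn out to be infrequent are dropped from the in-memory transactions, so later passes scan smaller transactions. The thresholds, data layout, and function name are assumptions for illustration, not the thesis's filter design.

```python
# Hypothetical sketch of frequent-itemset mining with an in-memory transaction filter.
from collections import Counter
from itertools import combinations

def filtered_frequent_itemsets(transactions, min_support, max_k=3):
    transactions = [set(t) for t in transactions]      # in-memory working copy
    frequent, k = {}, 1
    while k <= max_k and transactions:
        counts = Counter()
        for t in transactions:
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
        level = {s: c for s, c in counts.items() if c >= min_support}
        if not level:
            break
        frequent.update(level)
        # Transaction filter: keep only items that appear in some frequent k-itemset,
        # and drop transactions too small to contain a (k+1)-itemset.
        keep = {item for itemset in level for item in itemset}
        transactions = [t & keep for t in transactions if len(t & keep) > k]
        k += 1
    return frequent
```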

    Performance Modeling of Inline Compression With Software Caching for Reducing the Memory Footprint in pySDC

    Modern HPC applications compute and analyze massive amounts of data. The data volume is growing faster than memory capacity and storage improvements, leading to performance bottlenecks. An example of this is pySDC, a framework for solving collocation problems iteratively using parallel-in-time methods. These methods require storing and exchanging 3D volume data for each parallel point in time. If a simulation consists of M parallel-in-time stages, where the full spatial problem has to be stored for the next iteration, the memory demand for a single state variable is M × Nx × Ny × Nz per time-step. For an application simulation with many state variables or stages, the memory requirement is considerable. Data compression helps alleviate the memory overhead by reducing the size of the data and keeping it in compressed format. Inline compression compresses and decompresses the application's working set as it moves in and out of main memory, providing the system with the appearance of more main memory. Naive compressed arrays require a compression or decompression operation for each store or load and therefore hurt the performance of the application. By incorporating a software cache that stores decompressed values of the array, we limit the number of compression and decompression operations for stores and loads, thereby improving overall performance. In this thesis, we build a compression manager and software cache manager for the pySDC framework to reduce the memory requirements and computational overhead. The compression manager wraps around LibPressio, a C++ compression library that abstracts all compressors. We use blosc, a lossless compressor, for our compression manager, and build a software cache manager with various cache configurations and cache policies to work in cohesion with the compression manager. We build a performance model that evaluates the compression manager and cache manager on metrics such as compression ratio and compression/decompression time. We test our framework on two pySDC applications, Allen-Cahn and heat diffusion. Results show that incorporating compression and increasing the cache size for our applications inflates the total compressed size in bytes for the arrays and therefore reduces the compression ratio, in contrast to our expectations. However, incorporating the cache and a larger cache size reduces the number of compression/decompression calls to LibPressio as well as cache evictions, significantly reducing the computational overhead for pySDC. Thus, overall, our compression and cache managers help reduce the memory footprint in pySDC. Future work involves improving the compression ratio and using lossy compression to achieve a significant reduction in memory footprint.
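
    The sketch below shows the general mechanism of inline compression behind a small software cache: chunks live compressed in memory, and an LRU cache of decompressed chunks lets repeated loads and stores skip the compress/decompress round trip. It uses the standard-library zlib as a stand-in for LibPressio/blosc, and the class, cache size, and write-back-on-evict policy are illustrative assumptions, not the thesis's managers.

```python
# Hypothetical sketch: compressed chunk store with a small LRU software cache.
import zlib
from collections import OrderedDict

class CompressedStore:
    def __init__(self, cache_slots=4):
        self._blobs = {}                      # chunk_id -> compressed bytes
        self._cache = OrderedDict()           # chunk_id -> decompressed bytes (LRU)
        self._slots = cache_slots

    def store(self, chunk_id, data: bytes):
        self._cache[chunk_id] = data          # write into the cache; compress on evict
        self._cache.move_to_end(chunk_id)
        self._evict_if_needed()

    def load(self, chunk_id) -> bytes:
        if chunk_id in self._cache:           # cache hit: no decompression needed
            self._cache.move_to_end(chunk_id)
            return self._cache[chunk_id]
        data = zlib.decompress(self._blobs[chunk_id])
        self._cache[chunk_id] = data
        self._evict_if_needed()
        return data

    def _evict_if_needed(self):
        while len(self._cache) > self._slots:
            victim, data = self._cache.popitem(last=False)
            self._blobs[victim] = zlib.compress(data)   # write back compressed
```

    With this layout, a larger cache trades resident (decompressed) memory for fewer compression calls, which mirrors the trade-off measured in the thesis.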

    Knapsack-model-based Schemes and WLB Algorithm for the Capacity and Efficiency Management of Web Cache

    Web cache refers to the temporary storage of web files/documents. In practice, a set of caches can be grouped into a cluster to improve the server system's performance. In this paper, to improve overall cluster efficiency, we propose a weighted load balancing (WLB) routing algorithm that considers both cache capability and content properties to determine how to direct an arriving request to the right node. Based on knapsack models, we characterize three new placement/replacement schemes for Web content caching and then compare them under the WLB algorithm. We also compare the WLB algorithm with two other widely used algorithms: the pure load balancing (PLB) algorithm and the round-robin (RR) algorithm. Extensive simulation results show that the WLB algorithm works well under the examined cluster content placement/replacement schemes. It generally results in shorter response time and higher cache hit ratio, especially when cache cluster capacity is scarce.
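
    A minimal sketch of weighted routing of this kind is shown below: each arriving request goes to the node with the best score, combining a capability term (free capacity) with a content term (whether the node already holds the object). The scoring weights and data layout are assumptions for illustration, not the WLB formula from the paper.

```python
# Hypothetical weighted load-balancing routing sketch.
def route(request_obj, nodes, alpha=0.7, beta=0.3):
    """nodes: list of dicts with 'id', 'capacity', 'load', and 'contents' (a set)."""
    def score(n):
        free = max(n["capacity"] - n["load"], 0) / n["capacity"]   # capability term
        hit = 1.0 if request_obj in n["contents"] else 0.0         # content term
        return alpha * free + beta * hit
    return max(nodes, key=score)["id"]
```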