
    Simulation of hierarchical storage systems for TCO and QoS

    Due to the variety of storage technologies, deep storage hierarchies turn out to be the most feasible choice for meeting performance and cost requirements when handling vast amounts of data. Long-term archives employed by scientific users rely mainly on tape storage, as it remains the most cost-efficient option. Archival systems are often only loosely integrated into the HPC storage infrastructure. In expectation of exascale systems and in situ analysis, burst buffers will also require integration with the archive. Exploring new strategies and developing open software for tape systems is difficult because affordable storage silos are rarely available outside of large organizations and because ultra-durable data demands heightened caution. Lessening these problems by providing virtual storage silos should enable community-driven innovation and allow site operators to add features where they see fit, while letting them verify strategies before deploying them on production systems. Different models for the individual components of tape systems are developed. The models are then implemented in a prototype simulator using discrete event simulation. The work shows that the simulations can approximate the behavior of tape systems deployed in the real world and support experiments without requiring a physical tape system.
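    As an illustrative aside (not from the paper): the sketch below shows the skeleton of such a discrete event simulation, modelling tape recalls served by a small pool of drives. All parameters (mount time, seek time, drive bandwidth, drive count) and the greedy dispatch policy are invented assumptions for demonstration.

```python
# Illustrative sketch only: a toy discrete event simulation of tape
# recalls served by a pool of drives. Every constant here (mount time,
# seek time, bandwidth, drive count) is an invented assumption.
import random

MOUNT_S = 90.0      # assumed robot fetch + load time (s)
SEEK_S = 40.0       # assumed average seek-to-file time (s)
BANDWIDTH = 250e6   # assumed drive throughput (bytes/s)
NUM_DRIVES = 4

def simulate(requests):
    """requests: list of (arrival_time_s, size_bytes) tape recalls."""
    drive_free_at = [0.0] * NUM_DRIVES    # next idle time per drive
    latencies = []
    for arrival, size in sorted(requests):
        d = min(range(NUM_DRIVES), key=lambda i: drive_free_at[i])
        start = max(arrival, drive_free_at[d])      # wait for a drive
        service = MOUNT_S + SEEK_S + size / BANDWIDTH
        drive_free_at[d] = start + service
        latencies.append(drive_free_at[d] - arrival)
    return sum(latencies) / len(latencies)

random.seed(0)
reqs = [(random.uniform(0, 3600), random.uniform(1e9, 50e9))
        for _ in range(200)]
print(f"mean recall latency: {simulate(reqs):.0f} s")
```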

    Planning and managing the cost of compromise for AV retention and access

    Long-term retention of and access to audiovisual (AV) assets as part of a preservation strategy inevitably involve some form of compromise in order to achieve acceptable levels of cost, throughput, quality, and many other parameters. Examples include quality control and throughput in media transfer chains; data safety and accessibility in digital storage systems; and service levels for ingest and access when archive functions are delivered as services. We present new software tools and frameworks, developed in the PrestoPRIME project, that allow these compromises to be quantitatively assessed, planned, and managed for file-based AV assets. Our focus is on giving an archive assurance that a preservation strategy designed and operated as a set of services will function as expected and will cope with the inevitable and often unpredictable variations that arise in operation. This includes being able to make cost projections, perform sensitivity analysis, simulate “disaster scenarios,” and govern preservation services using service-level agreements and policies.
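    As an illustrative aside (not from the paper): a cost projection with sensitivity analysis of the kind described can be sketched as a simple Monte Carlo over uncertain inputs. All figures below (storage price, price-decline rate, migration cost) are invented assumptions, not PrestoPRIME data.

```python
# Illustrative sketch only: Monte Carlo cost projection for a file-based
# archive. All figures (prices, decline rate, migration cost) are
# invented assumptions, not PrestoPRIME data.
import random

def project_cost(years=10, tb=500, cost_per_tb=30.0,
                 decline=0.15, migrate_every=4, migration_cost_tb=5.0):
    """Total cost: storage re-priced yearly at declining media prices,
    plus a periodic migration of the whole holding to fresh media."""
    total = 0.0
    for y in range(years):
        total += tb * cost_per_tb * (1 - decline) ** y
        if y > 0 and y % migrate_every == 0:
            total += tb * migration_cost_tb
    return total

# Sensitivity analysis: perturb the price-decline assumption.
random.seed(1)
samples = sorted(project_cost(decline=random.uniform(0.05, 0.25))
                 for _ in range(1000))
print(f"median 10-year cost:  ${samples[500]:,.0f}")
print(f"95th-percentile cost: ${samples[950]:,.0f}")
```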

    Simulation of Automated File Migration in Information Lifecycle Management

    Information Lifecycle Management (ILM) is a strategic concept for the storage of information and documents. ILM is based on the idea that, within an enterprise, information has different values, and that information of different value is stored on different storage tiers. ILM offers significant potential cost savings through storage tiering, and 90% of decision makers consider implementing ILM (Linden 2006). Nonetheless, there are too few experience reports, and experimentation and research on real systems are too expensive. This paper addresses this issue and contributes to supporting IT managers in their decision-making process. ILM automation needs migration rules. There are well-known static, heuristic migration rules, and we present a new dynamic migration rule for ILM. These migration rules are implemented in an ILM simulator, and we compare the performance of the new dynamic rule with that of the heuristics. The simulative approach has two advantages: it offers predictions about the dynamic behaviour of an ILM migration rule, and it dispenses with real storage hardware. Simulation leads to decisions under certainty. When making a decision under certainty, the major problem is to determine the trade-off among different objectives; cost-benefit analysis can be used for this purpose. A decision matrix is laid out in which rows represent choices and columns represent states of nature. The simulated results support the choice of migration rules and help to avoid mismanagement and poor investments in advance, raising awareness of which alternative is best.
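    As an illustrative aside (not from the paper): the sketch below contrasts a static, age-based migration heuristic with a dynamic, access-driven rule. The rules, tier names, and thresholds are generic stand-ins, not the paper's formulas.

```python
# Illustrative sketch only: two generic ILM migration rules of the kind
# the paper compares. Tier names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class FileMeta:
    age_days: int        # time since creation
    idle_days: int       # time since last access
    accesses_30d: int    # access count in the last 30 days

def static_rule(f: FileMeta) -> str:
    """Static heuristic: migrate purely on age thresholds."""
    if f.age_days > 365:
        return "archive"
    if f.age_days > 90:
        return "nearline"
    return "online"

def dynamic_rule(f: FileMeta) -> str:
    """Dynamic rule: observed access behaviour overrides age,
    so a hot old file stays on fast storage."""
    if f.accesses_30d >= 10:
        return "online"
    if f.idle_days > 180:
        return "archive"
    return "nearline" if f.idle_days > 30 else "online"

f = FileMeta(age_days=400, idle_days=2, accesses_30d=25)
print(static_rule(f), "vs", dynamic_rule(f))   # archive vs online
```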

    Modeling Information Lifecycle Management


    AN EVALUATION OF LEGACY 3-TIER DATACENTER NETWORKS FOR ENTERPRISE COMPUTING USING MATHEMATICAL INDUCTION ALGORITHM

    In today’s internet computing, regardless of the scale of infrastructural integration, design cost, QoS, power management, and similar factors largely drive the choice of design. In this paper, we present the limitations of traditional DataCenter Networks (DCN) for efficient web application integration in enterprise organisations. We carried out an in-depth study of two typical enterprise DCNs, the University of Nigeria DCN and the Swift Network DCN in Lagos State, seeking to ascertain the limitations of the traditional DCN with respect to throughput, latency, scalability, and efficiency in web application integration in a QoS context. A Microtic server and Ethereal Wireshark were employed for traffic trend observation and packet capture on a monitoring Dell Inspiron laptop connected to the UNN DCN. The traffic graphs were captured, computed, and analysed, and deductions about the limitations of these networks were drawn from the results. Using the mathematical induction theorem, we show that any introduced network enhancer enables such a network to scale optimally. In this regard, this work argues that for large-scale enterprise computing, collapsing a three-tier network model into a low-cost two-tier model using virtualization and consolidation will be widely welcomed. This forms the basis for our future work on a re-engineered DCN for enterprise web application integration.
    Keywords: Internet, Computing, Efficiency, Application, Ethereal, Wireshark, Enterprise
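    As an illustrative aside (not from the paper): the three-tier versus two-tier argument can be quantified with two simple metrics, worst-case switch hop count and oversubscription ratio. The port counts and link speeds below are invented.

```python
# Illustrative sketch only: comparing a classic 3-tier DCN against a
# collapsed 2-tier (leaf-spine) design. Port counts and link speeds
# are invented assumptions.
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of offered downstream bandwidth to upstream capacity;
    1:1 is non-blocking, higher values mean contention under load."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 3-tier worst case: access -> aggregation -> core -> aggregation -> access
three_tier_hops = 5   # switches traversed
# 2-tier worst case: leaf -> spine -> leaf
two_tier_hops = 3

access = oversubscription(48, 1, 4, 10)   # 48x1G down, 4x10G up
leaf = oversubscription(48, 10, 6, 40)    # 48x10G down, 6x40G up

print(f"3-tier: worst-case hops {three_tier_hops}, access OS {access:.1f}:1")
print(f"2-tier: worst-case hops {two_tier_hops}, leaf OS {leaf:.1f}:1")
```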

    Towards a Management Paradigm with a Constrained Benchmark for Autonomic Communications

    This paper describes a management paradigm for autonomic activation, monitoring, and control of services or products in future converged telecommunications networks. It suggests an architecture that places the various management functions into a structure that can then be used to select those functions which may yield to autonomic management, as well as to guide the design of the algorithms. The architecture is validated, with particular focus on service configuration, via a genetic algorithm: Population-Based Incremental Learning (PBIL). Even with this centralized adaptation strategy, the simulation results show that the proposed architecture can be applied to the constrained benchmark and produces effective convergence, finding nearly optimal configurations under multiple constraints.
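    As an illustrative aside (not from the paper): a minimal PBIL loop is sketched below against a toy constrained objective that stands in for the paper's service-configuration benchmark; the population size, learning rate, and constraint are arbitrary choices.

```python
# Illustrative sketch only: a minimal PBIL loop. The toy constrained
# one-max fitness stands in for the paper's benchmark; all
# hyperparameters are arbitrary.
import random

def pbil(n_bits=20, pop=30, lr=0.1, generations=100, seed=0):
    rng = random.Random(seed)

    def fitness(bits):
        # Toy objective: maximise set bits, penalising a stand-in
        # constraint violation (at most 15 bits may be set).
        ones = sum(bits)
        return ones if ones <= 15 else 15 - (ones - 15)

    p = [0.5] * n_bits                        # probability vector
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        population = [[1 if rng.random() < pi else 0 for pi in p]
                      for _ in range(pop)]
        elite = max(population, key=fitness)
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        # Shift the probability vector toward the elite individual.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
    return best, best_fit

best, fit = pbil()
print(f"best fitness: {fit}, configuration: {best}")
```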

    Architecting Data Centers for High Efficiency and Low Latency

    Modern data centers, housing remarkably powerful computational capacity, are built at massive scale and consume a huge amount of energy. The energy consumption of data centers has mushroomed from virtually nothing to about three percent of the global electricity supply in the last decade, and will continue to grow. Unfortunately, a significant fraction of this energy is wasted due to the inefficiency of current data center architectures, and one of the key reasons behind this inefficiency is the stringent response latency requirements of the user-facing services hosted in these data centers, such as web search and social networks. To deliver such low response latency, data center operators often have to overprovision resources to handle high peaks in user load and unexpected load spikes, resulting in low efficiency. This dissertation investigates data center architecture designs that reconcile high system efficiency and low response latency. To increase efficiency, we propose techniques that understand both microarchitectural-level resource sharing and system-level resource usage dynamics to enable highly efficient co-locations of latency-critical services and low-priority batch workloads. We investigate resource sharing on real-system simultaneous multithreading (SMT) processors to enable SMT co-locations by precisely predicting the performance interference. We then leverage historical resource usage patterns to further optimize the task scheduling algorithm and data placement policy and improve the efficiency of workload co-locations. Moreover, we introduce methodologies to better manage response latency by automatically attributing the source of tail latency to low-level architectural and system configurations, in both offline load-testing environments and online production environments. We design and develop a response latency evaluation framework with microsecond-level precision for data center applications, with which we construct statistical inference procedures to attribute the source of tail latency. Finally, we present an approach that proactively enacts carefully designed causal inference micro-experiments to diagnose the root causes of response latency anomalies, and automatically corrects them to reduce the response latency.

    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    https://deepblue.lib.umich.edu/bitstream/2027.42/144144/1/yunqi_1.pd
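    As an illustrative aside (not from the dissertation): tail-latency attribution of the kind described starts by grouping latency samples per candidate configuration and comparing tail percentiles. The configuration tags and latency distributions below are fabricated for demonstration.

```python
# Illustrative sketch only: group latency samples by a low-level
# configuration tag and compare tail percentiles to spot the worst
# contributor. All samples below are fabricated.
import random
from collections import defaultdict

def percentile(sorted_vals, q):
    return sorted_vals[min(len(sorted_vals) - 1, int(q * len(sorted_vals)))]

random.seed(2)
samples = ([("turbo-on", random.gauss(200, 30)) for _ in range(5000)]
           + [("turbo-off", random.gauss(260, 90)) for _ in range(5000)])

by_cfg = defaultdict(list)
for cfg, latency_us in samples:
    by_cfg[cfg].append(latency_us)

for cfg, lats in by_cfg.items():
    lats.sort()
    print(f"{cfg:10s} p50={percentile(lats, 0.50):6.0f} us "
          f"p99={percentile(lats, 0.99):6.0f} us")
```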

    Optical networking special issue based on selected papers of IEEE ANTS 2015

    In “Priority-based content processing with Q-routing in Information Centric Networking (ICN),” Sibendu Paul, Bitan Banerjee, Amitava Mukherjee, and Mrinal K. Naskar address the problem of managing content in a cache of finite storage capacity in ICN by proposing an efficient content management policy that turns a router into a self-sustained cache. A novel algorithm based on Q-routing is proposed to determine the order of service for content packets in the buffer of a cache and to find the next node toward the destination with minimum propagation delay.
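    As an illustrative aside (not from the paper): the classic Q-routing update rule (Boyan and Littman, 1994), which such algorithms build on, is sketched below; the topology, delays, and learning rate are invented.

```python
# Illustrative sketch only: the classic Q-routing update (Boyan &
# Littman, 1994). Topology, delays, and the learning rate are invented.
ALPHA = 0.5   # learning rate

def init_q(nodes, neighbors):
    """Q[x][d][y]: estimated delivery time from x to destination d via y."""
    return {x: {d: {y: 0.0 for y in neighbors[x]}
                for d in nodes if d != x}
            for x in nodes}

def q_update(Q, x, d, y, send_delay, queue_delay):
    """After forwarding a packet bound for d from x to neighbor y,
    pull the estimate toward y's best remaining estimate."""
    remaining = 0.0 if y == d else min(Q[y][d].values())
    target = queue_delay + send_delay + remaining
    Q[x][d][y] += ALPHA * (target - Q[x][d][y])

nodes = ["a", "b", "c"]
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
Q = init_q(nodes, neighbors)
q_update(Q, "a", "c", "b", send_delay=1.0, queue_delay=0.2)
print(Q["a"]["c"])   # {'b': 0.6, 'c': 0.0}: estimate via b moved toward 1.2
```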