
    Virtual Inertia Emulation to Improve Dynamic Frequency Stability of Low Inertia Microgrids

    Due to low inertia and the intermittent nature of photovoltaic systems, dynamic frequency stability issues arise in microgrids with large photovoltaic systems, which limits the photovoltaic penetration the microgrid can accommodate. To increase photovoltaic penetration, a dynamic frequency controller that acts faster than the primary frequency controller (governor control) needs to be added to the microgrid. For dynamic frequency control, an inertial response can be provided by an energy storage system (such as a battery, ultra-capacitor, or photovoltaic system), which is termed virtual inertia. A virtual inertia (VI) can be defined as the combination of an energy storage system, a power electronics converter, and a suitable control algorithm that improves the dynamic frequency stability of the microgrid; it supplies active power from, or absorbs it into, the energy storage system to improve dynamic frequency stability. This thesis presents the design and implementation of a 1 kW hardware prototype of virtual inertia in a microgrid with a real diesel generator and a load. For a step change in load, the virtual inertia improved the frequency response of the system from 57.39 Hz to 58.03 Hz, experimentally validating the concept of the existing proportional-derivative (PD) based virtual inertia. With the addition of virtual inertia, however, the system frequency returns to nominal more slowly; once the primary controller (governor control) acts to regulate the frequency, the virtual inertia no longer needs to add inertia to the system, so the dynamics of the VI need to be improved so that the frequency returns to nominal faster. This thesis therefore also proposes an online-learning virtual inertia controller based on adaptive dynamic programming (ADP) that learns online and improves the dynamics of the existing VI controller. The output of this controller supplements the output of the existing proportional-derivative controller of the virtual inertia; the supplementary controller is trained to speed up the outer controller and bring the system frequency back to nominal faster. Because of the faster dynamics, the net energy delivered by the VI can be reduced significantly, increasing the total number of possible discharge cycles from the battery. For performance evaluation, the proposed controller was implemented in a microgrid with a photovoltaic system, a diesel generator, and a variable load. With the proposed controller, the system frequency returned to nominal faster, and the net energy delivered in the photovoltaic-diesel microgrid was 46.14% of the net energy delivered by the existing virtual inertia. Due to this reduction in total energy delivered, the total number of possible battery discharge cycles with the ADP-based VI was 2.17 times that of the existing VI.
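    The abstract describes the existing virtual inertia as a proportional-derivative controller acting on the frequency deviation and its rate of change. The sketch below illustrates such a PD law in Python; the gains, sample time, and limits are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of a PD-based virtual inertia loop (illustrative only).
F_NOMINAL = 60.0      # Hz, nominal microgrid frequency
KP = 2000.0           # W/Hz, proportional (droop-like) gain -- assumed value
KD = 500.0            # W*s/Hz, derivative (inertia-emulating) gain -- assumed value
P_MAX = 1000.0        # W, converter rating (the 1 kW prototype)
DT = 0.01             # s, control period -- assumed value

def vi_power_command(f_meas: float, f_prev: float) -> float:
    """Active-power set point for the storage converter; positive means
    the energy storage discharges into the microgrid."""
    df = F_NOMINAL - f_meas                # frequency deviation
    rocof = (f_meas - f_prev) / DT         # rate of change of frequency
    p_cmd = KP * df - KD * rocof           # oppose both deviation and ROCOF
    return max(-P_MAX, min(P_MAX, p_cmd))  # respect the converter rating
```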

    BORG: Block-reORGanization and Self-optimization in Storage Systems

    This paper presents the design, implementation, and evaluation of BORG, a self-optimizing storage system that performs automatic block reorganization based on the observed I/O workload. BORG is motivated by three characteristics of I/O workloads: non-uniform access frequency distribution, temporal locality, and partial determinism in non-sequential accesses. To achieve its objective, BORG manages a small, dedicated partition on the disk drive, with the goal of servicing a majority of I/O requests from within this partition with significantly reduced seek and rotational delays. BORG is transparent to the rest of the storage stack, including applications, file system(s), and I/O schedulers, and therefore requires little or no modification to storage stack implementations. We evaluated a Linux implementation of BORG using several real-world workloads, including individual user desktop environments, a web server, a virtual machine monitor, and an SVN server. These experiments comprehensively demonstrate BORG's effectiveness in improving I/O performance and quantify the resource overhead it incurs.
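    As a rough illustration of BORG's core idea (not the system's actual code), the sketch below counts block access frequencies in an observed workload, places the hottest blocks in a small dedicated partition, and redirects matching requests through a remap table; the names and data structures are assumptions.

```python
from collections import Counter

def build_remap(trace, partition_blocks):
    """trace: iterable of logical block addresses seen in the I/O workload.
    partition_blocks: capacity of the dedicated partition, in blocks."""
    freq = Counter(trace)
    hot = [lba for lba, _ in freq.most_common(partition_blocks)]
    # Hot blocks are laid out contiguously inside the partition so that most
    # requests are served with short seeks and small rotational delays.
    return {lba: slot for slot, lba in enumerate(hot)}

def translate(lba, remap):
    # Requests that hit the remap table go to the dedicated partition;
    # everything else falls through to its original on-disk location.
    return ("partition", remap[lba]) if lba in remap else ("original", lba)
```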

    TransCom: a virtual disk-based cloud computing platform for heterogeneous services

    This paper presents the design, implementation, and evaluation of TransCom, a virtual disk (Vdisk) based cloud computing platform that supports heterogeneous services of operating systems (OSes) and their applications in enterprise environments. In TransCom, clients store all data and software, including the OS and application software, on Vdisks that correspond to disk images located on centralized servers, while computing tasks are carried out by the clients. Users can boot any client into the desired OS, including Windows, and access software and data services from Vdisks as usual, without having to deal with tasks such as installation, maintenance, and management. By centralizing storage yet distributing computing tasks, TransCom can greatly reduce potential system maintenance and management costs. We have implemented a multi-platform TransCom prototype that supports both Windows and Linux services. Extensive evaluation based on both test-bed experiments and real-usage experiments has demonstrated that TransCom is a feasible, scalable, and efficient solution for real-world use.
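    The sketch below illustrates the split TransCom describes, with computation on the client and storage on a central image server: each virtual-disk block read is fetched remotely. The wire format, block size, and function names are assumptions for illustration, not TransCom's actual protocol.

```python
import socket
import struct

BLOCK_SIZE = 4096  # bytes per Vdisk block -- assumed

def read_vdisk_block(server: str, port: int, image_id: int, block_no: int) -> bytes:
    """Fetch one block of the client's disk image from the central server."""
    with socket.create_connection((server, port)) as sock:
        # Hypothetical request framing: image id and block number, 64-bit big-endian.
        sock.sendall(struct.pack(">QQ", image_id, block_no))
        data = b""
        while len(data) < BLOCK_SIZE:
            chunk = sock.recv(BLOCK_SIZE - len(data))
            if not chunk:
                raise IOError("server closed connection mid-block")
            data += chunk
        return data
```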

    HVSTO: Efficient Privacy Preserving Hybrid Storage in Cloud Data Center

    In cloud data centers, well-managed shared storage is the main structure used to store virtual machines (VMs). In this paper, we propose Hybrid VM Storage (HVSTO), a privacy-preserving shared storage system designed for virtual machine storage in large-scale cloud data centers. Unlike traditional shared storage, HVSTO adopts a distributed structure to preserve the privacy of virtual machines, which is threatened in the traditional centralized structure. To reduce I/O latency in this distributed structure, we use a hybrid design that combines solid-state disks with distributed storage. Our evaluation of a demonstration system shows that HVSTO provides scalable and sufficient throughput for the platform-as-a-service infrastructure. Comment: 7 pages, 8 figures; in proceedings of the Second International Workshop on Security and Privacy in Big Data (BigSecurity 2014)
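    A minimal sketch of the hybrid idea, assuming an LRU-managed SSD tier in front of a distributed backing store; the class and method names are illustrative, not HVSTO's interfaces.

```python
from collections import OrderedDict

class HybridStore:
    """SSD tier absorbs latency-sensitive reads; the distributed store holds
    the full VM image spread across nodes."""
    def __init__(self, ssd_capacity, distributed_backend):
        self.ssd = OrderedDict()            # block -> data, kept in LRU order
        self.capacity = ssd_capacity
        self.backend = distributed_backend  # hypothetical object with get(block)

    def read(self, block):
        if block in self.ssd:               # fast path: SSD hit
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = self.backend.get(block)      # slow path: distributed store
        self._cache(block, data)
        return data

    def _cache(self, block, data):
        self.ssd[block] = data
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)    # evict the least recently used block
```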

    CORE: Augmenting Regenerating-Coding-Based Recovery for Single and Concurrent Failures in Distributed Storage Systems

    Data availability is critical in distributed storage systems, especially when node failures are prevalent in real life. A key requirement is to minimize the amount of data transferred among nodes when recovering the lost or unavailable data of failed nodes. This paper explores recovery solutions based on regenerating codes, which are shown to provide fault-tolerant storage and minimum recovery bandwidth. Existing optimal regenerating codes are designed for single node failures. We build a system called CORE, which augments existing optimal regenerating codes to support a general number of failures, including single and concurrent failures. We theoretically show that CORE achieves the minimum possible recovery bandwidth in most cases. We implement CORE and evaluate our prototype atop a Hadoop HDFS cluster testbed with up to 20 storage nodes. We demonstrate that our CORE prototype conforms to our theoretical findings and achieves recovery bandwidth savings compared to the conventional recovery approach based on erasure codes. Comment: 25 pages
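    For context (a standard result from the regenerating-codes literature rather than from the CORE paper itself), the single-failure repair bandwidth at the minimum-storage regenerating (MSR) point is:

```latex
% M: file size, k: nodes required to reconstruct the file,
% d: helper nodes contacted during repair, \alpha: storage per node.
\[
  \alpha = \frac{M}{k}, \qquad
  \gamma_{\mathrm{MSR}} = \frac{d}{d-k+1}\cdot\frac{M}{k}
\]
```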

    Checkpointing as a Service in Heterogeneous Cloud Environments

    A non-invasive, cloud-agnostic approach is demonstrated for extending existing cloud platforms to include checkpoint-restart capability. Most cloud platforms currently rely on each application to provide its own fault tolerance. A uniform mechanism within the cloud itself serves two purposes: (a) direct support for long-running jobs, which would otherwise require a custom fault-tolerant mechanism for each application; and (b) the administrative capability to manage an over-subscribed cloud by temporarily swapping out jobs when higher-priority jobs arrive. An advantage of this uniform approach is that it also supports parallel and distributed computations, over both TCP and InfiniBand, thus allowing traditional HPC applications to take advantage of an existing cloud infrastructure. Additionally, an integrated health-monitoring mechanism detects when long-running jobs either fail or incur exceptionally low performance, perhaps due to resource starvation, and proactively suspends the job. The cloud-agnostic feature is demonstrated by applying the implementation to two very different cloud platforms: Snooze and OpenStack. The use of a cloud-agnostic architecture also enables, for the first time, migration of applications from one cloud platform to another. Comment: 20 pages, 11 figures; appears in CCGrid, 201
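    A toy sketch of the health-monitoring behavior described above, assuming a job object with a hypothetical interface (is_alive, progress, checkpoint, suspend); this is not the paper's actual API.

```python
import time

def monitor(job, min_rate, period=30.0):
    """Periodically sample a long-running job's progress and checkpoint/suspend
    it when it fails or falls below a performance floor (hypothetical interface)."""
    last = job.progress()                 # monotonic progress counter
    while job.is_alive():
        time.sleep(period)
        current = job.progress()
        rate = (current - last) / period
        if rate < min_rate:               # resource starvation or failure suspected
            job.checkpoint()              # save state, e.g. to shared storage
            job.suspend()                 # free resources for higher-priority jobs
            return "suspended"
        last = current
    return "finished"
```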

    OpenPING: A Reflective Middleware for the Construction of Adaptive Networked Game Applications

    The emergence of distributed Virtual Reality (VR) applications that run over the Internet has presented networked game application designers with new challenges. In an environment where the public Internet streams multimedia data and is constantly under pressure to deliver over widely heterogeneous user platforms, there is a growing need for distributed VR applications to be aware of, and adapt to, frequent variations in their execution context. In this paper, we argue that, in contrast to research efforts targeted at improving scalability, persistence, and responsiveness, far fewer attempts have addressed the flexibility, maintainability, and extensibility requirements of contemporary distributed VR platforms. We propose the use of structural reflection as an approach that not only addresses these requirements but also offers added value by providing a framework for scalability, persistence, and responsiveness that is itself flexible, maintainable, and extensible. We also present an adaptive middleware platform implementation called OpenPING that supports our proposal in addressing these requirements.
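    A toy sketch of structural reflection as argued for above: the middleware exposes its own composition as meta-level objects that can be inspected and rebound at run time. The names are illustrative and do not come from OpenPING.

```python
class MetaComponent:
    """Meta-level handle for one middleware component."""
    def __init__(self, name, impl):
        self.name, self.impl = name, impl

class ReflectiveMiddleware:
    def __init__(self):
        self._graph = {}                 # component name -> MetaComponent

    def bind(self, name, impl):
        self._graph[name] = MetaComponent(name, impl)

    def introspect(self):
        return list(self._graph)         # inspect what the platform is made of

    def adapt(self, name, new_impl):
        # Swap an implementation (e.g. a different interest-management or
        # dead-reckoning strategy) without stopping the running application.
        self._graph[name].impl = new_impl

    def call(self, name, *args, **kwargs):
        return self._graph[name].impl(*args, **kwargs)
```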

    ArrayBridge: Interweaving declarative array processing with high-performance computing

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug, and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation at NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability statistically indistinguishable from the native SciDB storage engine. Comment: 12 pages, 13 figures
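    As a small illustration of the kind of file-centric, in situ access that ArrayBridge makes interoperable with declarative queries, the sketch below reads a slab of an HDF5 array with h5py; the file name and dataset path are placeholders, and this is not ArrayBridge's own code.

```python
import h5py  # standard Python bindings for HDF5

with h5py.File("simulation_output.h5", "r") as f:    # placeholder file name
    temperature = f["/fields/temperature"]           # placeholder dataset path
    # A declarative engine would push a subarray query down to a range read
    # like this instead of loading the entire array into memory.
    slab = temperature[0:64, 0:64]
    print(slab.shape, slab.dtype)
```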